omap

package module
v1.1.39
Published: Feb 28, 2026 License: MIT Imports: 4 Imported by: 0

README

OMAP: a Sorted, Ordered Map

The fastest possible sorted map for caching, time-series, and scheduling.

The omap.OrderedMap instances offer a high-performance, thread-safe sorted map for Go, optimized for O(log n) lookups and O(1) boundary inserts using pre-allocated circular slices, and up to 2x faster for time-series and sequential data. The driver of the design process was the creation of a very good scheduler that could also double as a TTL cache expiration engine. Technically, the omap package implements a very minimal btree using a slice. The btree implementation is ordered and does not allow duplicates; the internals manage keys by splicing the internal slice. The side effect of this design is a structure that operates exactly like a sorted map. Under specific conditions, or on very large data sets, omap.SliceTree is faster on Get operations than the built-in Go map. An omap.SliceTree instance also uses significantly less memory than Go's built-in map.

Unlike tree-based maps, omap.SliceTree and omap.CenterTree range searches use direct slice referencing, avoiding tree traversal entirely. In general this makes them the optimized solution for caching and time-series maps.

Performance Matters

Performance objectives while maintaining a sorted map:

  • Lookups for both Put and Get operations are always a fixed complexity: O(log n).
  • All iteration operations have a fixed cost of O(1) per step.
  • Finding or removing elements between 2 points is always a fixed cost of O(log(n) + log(n)).
  • Finding elements at, before, or after a given point is always a fixed cost of O(log n).
  • Mass removal of unordered elements that may or may not exist has a maximum complexity of O(log(n) + log(k) + k).
  • Pre-emptive but predictable growth, controlled by setting the Growth size.
  • omap.SliceTree and omap.CenterTree support tunable pre-allocation.

omap.SliceTree:

  • Implements a pre-allocated sorted slice; the elements are sequential and start at index 0.
  • This offers the fastest possible search speed, along with fast bulk element deletion.

omap.CenterTree:

  • Implements a pre-allocated sorted slice; the elements are sequential and start at the center of the slice.
  • This offers fast search speed, along with the fastest possible range-based bulk element deletion.
  • Insertions at the beginning or end of the array use pre-allocated memory and are O(1).

On read, for strings:

  • Small number of keys: omap.CenterTree and omap.SliceTree are slightly faster than Go's internal map.
  • Large number of keys: omap.CenterTree and omap.SliceTree stay O(log n) and are faster than Go's internal map.

On write, for strings:

  • Best case, omap.CenterTree is twice as fast on write as Go's internal map.
  • Worst case, omap.CenterTree is half as fast on write as Go's internal map.

When should you use omap.CenterTree in place of a map?

Any one of these is a practical use case:

  • When using strings as keys and read performance matters
  • A sorted map is required
  • Memory-constrained systems
  • Fuzzy logic is required, i.e. the ability to find points in between keys
  • When a combination of frequent updates and searching by ranges is required
  • Very large data sets where read speed is more important than write speed
  • Keys that cannot be represented as a comparable value
  • When managing elements between ranges is required

Basic usage

The omap package provides a common interface, OrderedMap, implemented by omap.SliceTree and omap.CenterTree.

Creating a thread-safe instance. Example:

	kv := omap.NewTs[string, string](cmp.Compare)
	// Save a value
	kv.Put("Hello", " ")
	kv.Put("World", "!\n")

	// Iterate
	for k, v := range kv.All() {
		fmt.Printf("%s%s", k, v)
	}

The resulting output will be:

	"Hello World!\n"

We can now make things a bit smaller by removing elements by range.

	// Note, neither "Sell" nor "Zoo" was ever added to the instance,
	// but the between operation works on these keys anyway.
	kv.RemoveBetween("Sell", "Zoo")

	// Iterate
	for k, v := range kv.All() {
		fmt.Printf("%s%s\n", k, v)
	}

The resulting output will now be:

"Hello \n"

Why does this work?

  • The string "Sell" comes before the string "World"
  • The string "Zoo" comes after the string "World"

How this works

The index lookup creates 2 values for each potential key:

  • Array position, example: 0
  • Offset, which can be any of the following: -1, 0, 1

Since lookups create both an index position and an offset, it becomes possible to look for the following:

  • Elements before the array
  • Positions between elements of the array
  • Elements after the array
  • Elements to overwrite

API

Constructors

The omap package supports any key type you can provide a compare function for, but a Go map only supports comparable key types. This means any map can be converted to an OrderedMap instance, but not every OrderedMap instance can be converted to a map.

| Function | Types | Arguments | Returns | Thread Safe |
| --- | --- | --- | --- | --- |
| omap.New | [K any, V any] | func(a, b K) int | *SliceTree[K, V] | false |
| omap.NewFromMap | [K comparable, V any] | map[K]V, func(a, b K) int | *SliceTree[K, V] | false |
| omap.NewSliceTree | [K any, V any] | int, func(a, b K) int | *SliceTree[K, V] | false |
| omap.NewTs | [K any, V any] | func(a, b K) int | OrderedMap[K, V] | true |
| omap.ToMap | [K comparable, V any] | OrderedMap[K, V] | map[K]V | false |
| omap.NewCenterTree | [K any, V any] | int, func(a, b K) int | *CenterTree[K, V] | false |

As a note, any instance of SliceTree or CenterTree can create a thread-safe instance of itself by calling the s.ToTs() method. If you create a thread-safe instance, you should stop using the old instance.

Example conversion from map to a thread safe OrderedMap instance:

	s := omap.NewFromMap(myMap, cb).ToTs()

To check if an instance is thread safe call the s.ThreadSafe() method.

	var om omap.OrderedMap[string, int] = omap.New(cb)

	if !om.ThreadSafe() {
		om = om.ToTs()
	}

Why not always provide a thread-safe instance? A thread-safe instance requires mutex locking, which limits what can be done even when operations are atomic. For example, you may have a perfectly valid reason to call an iterator from within an iterator on the same instance; this cannot be done when a mutex lock is applied to an existing instance.

OrderedMap Methods

The following table provides a general overview of the methods in OrderedMap.

| Method | Arguments | Return types | Description |
| --- | --- | --- | --- |
| All | | iter.Seq2[K, V] | Iterator over all keys and values |
| Keys | | iter.Seq[K] | Iterator over all keys |
| Values | | iter.Seq[V] | Iterator over all values |
| Exists | key K | bool | True if the key was found |
| Contains | key K | bool | True if the key is between the FirstKey and LastKey |
| Put | key K, value V | int | Sets the key and value pair |
| Get | key K | value V, ok bool | Returns the value for the key if ok is true |
| GetKvSlice | | []KvSet[K, V] | Returns the current internal slice used for read operations |
| Filter | func(k K, v V) bool | | Deletes each element for which the callback returns true |
| FilterBetween | cb func(k K, v V) bool, a, b K, opt ...int | | Like Filter, but only runs between a and b; for options see Between Options |
| Remove | key K | value V, ok bool | If ok is true, the returned value was removed based on the given key |
| RemoveAll | | int | Clears all elements and returns how many elements were removed |
| MassRemove | keys ...K | int | Tries to remove all keys provided; returns how many keys were removed |
| MassRemoveKV | keys ...K | iter.Seq2[K, V] | Tries to remove all keys provided; the iterator holds a copy of all key/value pairs that were removed |
| Size | | int | Returns the number of key/value pairs in the instance |
| FirstKey | | key K, ok bool | When ok is true, the first key in the instance is returned |
| LastKey | | key K, ok bool | When ok is true, the last key in the instance is returned |
| Between | a, b K, opt ...int | total int | Returns the number of elements between a and b; for options see Between Options |
| BetweenKV | a, b K, opt ...int | iter.Seq2[K, V] | Returns an iterator over the key/value pairs between a and b; for options see Between Options |
| GetBetweenKvSlice | a, b K, opt ...int | []KvSet[K, V] | Returns a slice containing the elements between a and b; for options see Between Options |
| RemoveBetween | a, b K, opt ...int | int | Returns the number of elements removed between a and b; for options see Between Options |
| RemoveBetweenKV | a, b K, opt ...int | iter.Seq2[K, V] | Returns an iterator over the key/value pairs that were removed from between a and b; for options see Between Options |
| ThreadSafe | | bool | Returns true if this instance is thread safe |
| Merge | set OrderedMap[K, V] | int | Merges set into this instance |
| FastMerge | set OrderedMap[K, V] | int | Merges set into this instance; requires both set and the instance to be sorted in the same order |
| SetOverwrite | cb func(key K, oldValue, newValue V) | | Sets the callback that fires before a value is overwritten |
| SetGrowth | grow int | | Sets the internal growth value for the slice |
| ToTs | | OrderedMap[K, V] | If this instance is not already in a thread-safe wrapper, returns it in one; otherwise returns this instance |
Between Options

The following table explains the usage and possible values for functions that support between operations.

| opt[id] | Option | Description |
| --- | --- | --- |
| 0 | omap.FIRST_KEY | When set, the a argument is ignored and s.FirstKey is used in its place |
| 0 | omap.LAST_KEY | When set, the b argument is ignored and s.LastKey is used in its place |
| 0 | omap.FIRST_KEY\|omap.LAST_KEY | Causes both a and b to be ignored |

Example using s.BetweenKV:

	for k, v := range s.BetweenKV("", "Tomorrow", omap.FIRST_KEY) {
		fmt.Printf("Key: [%s], Value: [%d]\n", k, v)
	}

Returns all values up to "Tomorrow".

Benchmarks

Benchmarks are always somewhat subjective, but the real question is: what do we compare omap to? The only real answer is Go's native map, even though this is in no way a fair comparison. The omap package can use any data set, so long as a compare function can be provided, while Go's map only needs to be internally optimized for hashing bytes, so we would expect the native map to be faster.

Disclaimer: omap.SliceTree is built around a compare function, and omap.CenterTree is optimized for appending and prepending. This means the benchmark requires creating a proxy key equal to the key produced by the Cmp function. In this benchmark the key used for the map has to be generated from the base string, and the original string pointer and value then need to be saved off in an additional data structure; this gives us a like-for-like comparison between the native Go map and omap.SliceTree.

The following holds true for these benchmarks:

  • All read/get operations are best case for go's internal map and worst case for omap.SliceTree and omap.CenterTree
  • All write operations are worst case for omap.SliceTree and best case for go's internal map and omap.CenterTree

How well does omap compare to the native map in Go?

BenchmarkNew/Native_map_Put_keys:_[1600]-10                          26880                   44712 ns/op           93056 B/op       1606 allocs/op
BenchmarkNew/Slicetree_Put_keys:_[1600]-10                           17678                   67354 ns/op           41008 B/op          2 allocs/op
BenchmarkNew/CenterTree_Put_keys:_[1600]-10                          45187                   26866 ns/op           82016 B/op          3 allocs/op
BenchmarkNew/Native_map_Get_keys:_[1600]-10                          10000                  108296 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Get_keys:_[1600]-10                           10000                  106291 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Get_keys:_[1600]-10                          10000                  105921 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_map_Count_between:_[1600]-10                        61                17819665 ns/op           22554 B/op       2944 allocs/op
BenchmarkNew/SliceTree_Count_Between:_[1600]-10                       6261                  193573 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Count_Between:_[1600]-10                      6258                  191440 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_map_Put_keys:_[2500]-10                          15428                   77313 ns/op          169264 B/op       2510 allocs/op
BenchmarkNew/Slicetree_Put_keys:_[2500]-10                            9609                  116953 ns/op           65584 B/op          2 allocs/op
BenchmarkNew/CenterTree_Put_keys:_[2500]-10                          27031                   44275 ns/op          122976 B/op          3 allocs/op
BenchmarkNew/Native_map_Get_keys:_[2500]-10                           5223                  211555 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Get_keys:_[2500]-10                            6735                  167211 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Get_keys:_[2500]-10                           7146                  163492 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_map_Count_between:_[2500]-10                        26                43653353 ns/op           36999 B/op       4744 allocs/op
BenchmarkNew/SliceTree_Count_Between:_[2500]-10                       4006                  280648 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Count_Between:_[2500]-10                      4016                  289124 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_map_Put_keys:_[3600]-10                           9950                  117816 ns/op          304880 B/op       3618 allocs/op
BenchmarkNew/Slicetree_Put_keys:_[3600]-10                            5779                  180327 ns/op           90160 B/op          2 allocs/op
BenchmarkNew/CenterTree_Put_keys:_[3600]-10                          20005                   59949 ns/op          180320 B/op          3 allocs/op
BenchmarkNew/Native_map_Get_keys:_[3600]-10                           2528                  415280 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Get_keys:_[3600]-10                            4762                  246020 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Get_keys:_[3600]-10                           4780                  247967 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_map_Count_between:_[3600]-10                         9               120736810 ns/op           54710 B/op       6944 allocs/op
BenchmarkNew/SliceTree_Count_Between:_[3600]-10                       2800                  430063 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Count_Between:_[3600]-10                      2800                  432644 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_map_Put_keys:_[4900]-10                           7699                  155597 ns/op          336080 B/op       4918 allocs/op
BenchmarkNew/Slicetree_Put_keys:_[4900]-10                            3792                  294403 ns/op          122928 B/op          2 allocs/op
BenchmarkNew/CenterTree_Put_keys:_[4900]-10                          13556                   89000 ns/op          237664 B/op          3 allocs/op
BenchmarkNew/Native_map_Get_keys:_[4900]-10                            777                 1465044 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Get_keys:_[4900]-10                            3378                  365350 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Get_keys:_[4900]-10                           3321                  361986 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_map_Count_between:_[4900]-10                         4               282676296 ns/op           75682 B/op       9544 allocs/op
BenchmarkNew/SliceTree_Count_Between:_[4900]-10                       1942                  619004 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Count_Between:_[4900]-10                      1818                  623259 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_Map_Uint64_Put_keys:_100-10                 1000000000               0.0000040 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Map_Uint64_Put_keys:_100-10              1000000000               0.0000035 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Map_Uint64_Put_keys:_100-10             1000000000               0.0000043 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_Map_Uint64_Get_keys:_100-10                 1000000000               0.0000010 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Map_Uint64_int_Get_keys:_100-10          1000000000               0.0000033 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Map_Uint64_int_Get_keys:_100-10         1000000000               0.0000024 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_Map_Uint64_Put_keys:_1000-10                1000000000               0.0000236 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Map_Uint64_Put_keys:_1000-10             1000000000               0.0000186 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Map_Uint64_Put_keys:_1000-10            1000000000               0.0000100 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_Map_Uint64_Get_keys:_1000-10                1000000000               0.0000055 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Map_Uint64_int_Get_keys:_1000-10         1000000000               0.0000309 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Map_Uint64_int_Get_keys:_1000-10        1000000000               0.0000265 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_Map_Uint64_Put_keys:_10000-10               1000000000               0.0000931 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Map_Uint64_Put_keys:_10000-10            1000000000               0.0002200 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Map_Uint64_Put_keys:_10000-10           1000000000               0.0000737 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_Map_Uint64_Get_keys:_10000-10               1000000000               0.0000505 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Map_Uint64_int_Get_keys:_10000-10        1000000000               0.0003788 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Map_Uint64_int_Get_keys:_10000-10       1000000000               0.0003550 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_Map_Uint8_Put_keys:_10-10                   1000000000               0.0000007 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Map_Uint8_int_Put_keys:_10-10            1000000000               0.0000008 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Map_Uint8_int_Put_keys:_10-10           1000000000               0.0000006 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_Map_Uint8_Get_keys:_10-10                   1000000000               0.0000006 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Map_Uint8_int_Get_keys:_10-10            1000000000               0.0000007 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Map_Uint8_int_Get_keys:_10-10           1000000000               0.0000005 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_Map_Uint8_Put_keys:_100-10                  1000000000               0.0000020 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Map_Uint8_int_Put_keys:_100-10           1000000000               0.0000022 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Map_Uint8_int_Put_keys:_100-10          1000000000               0.0000032 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_Map_Uint8_Get_keys:_100-10                  1000000000               0.0000020 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Map_Uint8_int_Get_keys:_100-10           1000000000               0.0000023 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Map_Uint8_int_Get_keys:_100-10          1000000000               0.0000018 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_Map_Uint8_Put_keys:_255-10                  1000000000               0.0000052 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Map_Uint8_int_Put_keys:_255-10           1000000000               0.0000066 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Map_Uint8_int_Put_keys:_255-10          1000000000               0.0000078 ns/op               0 B/op          0 allocs/op
BenchmarkNew/Native_Map_Uint8_Get_keys:_255-10                  1000000000               0.0000048 ns/op               0 B/op          0 allocs/op
BenchmarkNew/SliceTree_Map_Uint8_int_Get_keys:_255-10           1000000000               0.0000079 ns/op               0 B/op          0 allocs/op
BenchmarkNew/CenterTree_Map_Uint8_int_Get_keys:_255-10          1000000000               0.0000053 ns/op               0 B/op          0 allocs/op

How to read the benchmark

So what do these numbers really tell us? Nothing we didn't already know prior to the benchmark. Go's map trades memory for read and write speed, in particular on write. Platforms are usually more CPU-constrained than memory-constrained, but that isn't always the case. We are looking at worst-case reads for all omap-based benchmarks and best-case writes for Go's internal map and omap.CenterTree. In that light, omap.CenterTree is, best case, a little more than twice as fast as Go's internal map for writes. Why? Read to the end of this file for details. Benchmarks are always a zero-sum game!

What version of go did you run this on?

  • 1.26

What setup did you use?

  • cpu: AMD Ryzen 9 9950X 16-Core Processor
  • mem: DDR5 6000 MT/s
  • go test -cpu 10
  • VM: Container under windows in docker desktop: Linux d2f8535c6a45 6.6.87.2-microsoft-standard-WSL2 #1 SMP PREEMPT_DYNAMIC Thu Jun 5 18:30:46 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux

On write:

  • omap.CenterTree is faster (when appending or prepending)
  • In most cases on small to medium data sets Go's map is faster, but not always

On read:

  • Go's map is faster when it comes to integers, on both read and write
  • omap.SliceTree and omap.CenterTree are faster when it comes to strings

Where the native map in Go always performs worse is memory usage:

  • Go map uses about 45%-70% more memory than omap.SliceTree or omap.CenterTree.

Scanning for any key between 2 strings:

  • Go's map must be iterated and each key compared: O(n)
  • omap.SliceTree and omap.CenterTree are just O(log(n) + log(n))

So which is better for performance? For strings, omap.SliceTree or omap.CenterTree; for int/float there is a use case for Go's internal map, but its benefits don't outweigh its losses.

Scanning all keys for every key is a known worst case, so why include it? People do it, and most sorted-map packages on pkg.go.dev will force you to do at least that until you reach the end of your range.

Why does the native go map read slow down so much after just 1600 strings?

It's complicated, but it's a combination of memory bandwidth and the internals of the native Go map doing full scans due to a large number of collisions.

Why is SliceTree always slower on write?

Simple: the O(log n) lookup has to be done prior to every write. Go's internal map skips the compare operation entirely until the 3rd tier. The very thing that gives omap.SliceTree its read speed at scale slows it down in write operations.

So which is better: SliceTree, CenterTree, or a map?

When it comes to read performance on strings, omap.SliceTree and omap.CenterTree are always faster. The omap.SliceTree type is built entirely around finding a range of keys without scanning. The omap.CenterTree is optimized for prepending and appending data to the slice. The map provided by Go is built around hashing. When it comes to a map of uint64, Go's internal map has a slight edge in performance.

Why include memory in benchmarks?

This is a complex topic, but here is a short answer: try turning memory benchmarks on for other sorted-map packages on pkg.go.dev; they use orders of magnitude more memory than the native map in Go. Most sorted-map implementations aren't performance-competitive with the native map in Go. omap.SliceTree and omap.CenterTree are at least competitive with the native Go map, and in specific use cases significantly faster. An instance of omap.SliceTree or omap.CenterTree does all this while being a sorted map, which is no small feat.

Comparing Go's map O(1) and omap.SliceTree O(log(n))

Go map: O(1). How so? In truth, Go's map uses the first 2 bytes of a hash as keys in a 2-tier tree; the remaining bytes then hit the O(1) path or a full scan. This is the sweet spot for most use cases. The side effect is that keys are never ordered.

omap.SliceTree is O(log n)? The omap.SliceTree is a btree limited by the platform's int (32 or 64 bit), indexed by sequence order. An omap.SliceTree instance never does a full scan, but an order-first search is always more expensive on smaller sets and always cheaper on larger sets. Effectively, omap.SliceTree is a pure order-first search with the root at the median of the array. This hits the sweet spot for range-based lookups and massive data sets. The side effect is an ordered index of keys.

omap.CenterTree is way faster than Go's map in these benchmarks. How so? The omap.CenterTree is optimized for pushing data to the beginning and end of the array. The array is pre-allocated and the data is stored in the middle of the array, so putting things before the first value or after the last value is very cheap. When writing to the middle, Go's internal map should usually be faster, but not always.

Odd Quirks of indexing

So which is faster for a 1-byte key: a map or omap.SliceTree?

  • On Write: omap.CenterTree is faster for append and prepend
  • On Read: go's map is slightly faster than omap.SliceTree

So which is faster for a 2-byte key with 65535 elements: a map in Go or omap.SliceTree?

  • At 2 bytes, a normal map in Go is faster

So when does omap.SliceTree actually become faster?

  • When a map in Go would end up with collisions on the 3rd tier, which causes a full scan of that tier.
  • When does that happen? Usually after a few thousand unique strings, but it depends on which buckets they land in.

Is omap.SliceTree or omap.CenterTree ever faster with ints or floats?

  • Yes on range scans

Why does omap.CenterTree have such good write performance? Is the 2x performance over Go's internal map real?

  • It's a trap! In reality, omap.CenterTree is optimized for append and prepend.
  • If not appending or prepending, there is roughly a 30-50% write-performance penalty due to the O(log n) lookup tax.

Documentation

Overview


The omap package provides a common interface, OrderedMap, implemented by omap.SliceTree and omap.CenterTree.

Basic Example:

	kv := omap.NewCenterTree[string, string](2, cmp.Compare)
	// Save a value
	kv.Put("Hello", " ")
	kv.Put("World", "!\n")

	// Iterate
	for k, v := range kv.All() {
		fmt.Printf("%s%s", k, v)
	}

The resulting output will be:

"Hello World!\n"

We can now make things a bit smaller by removing elements by range.

	kv.RemoveBetween("Sell", "Zoo")

	// Iterate
	for k, v := range kv.All() {
		fmt.Printf("%s%s\n", k, v)
	}

The resulting output will now be:

"Hello \n"


Index

Constants

const (
	FIRST_KEY = 1
	LAST_KEY  = 2
)
const MIN_GROWTH = 1

Variables

This section is empty.

Functions

func GetIndex

func GetIndex[K any, V any](k K, Cmp func(a, b K) int, Slices []KvSet[K, V]) (index, offset int)

Returns the index and offset of a given key.

The index is the current relative position in the slice.

The offset represents where the item would be placed:

  • offset of 0: at the index value.
  • offset of 1: expand the slice after the index and put the value to the right of the index.
  • offset of -1: expand the slice before the index and put the value to the left of the current position.

Complexity: O(log n)

func KvIter

func KvIter[K any, V any](set []KvSet[K, V]) iter.Seq2[K, V]

func Merge

func Merge[K comparable, V any](dst OrderedMap[K, V], src map[K]V) int

Merges a map into an OrderedMap instance.

func ToMap

func ToMap[K comparable, V any](src OrderedMap[K, V]) map[K]V

Utility method, to convert from an OrderedMap instance to a regular map. Due to constraints placed on maps in go, this feature is implemented as a function, not a method.

func TsKvIterWrapper

func TsKvIterWrapper[K any, V any](seq iter.Seq2[K, V]) iter.Seq2[K, V]

Takes an existing K,V iterator and returns a thread safe version.

Types

type CenterTree added in v1.1.20

type CenterTree[K any, V any] struct {
	*SliceTree[K, V]
	CenteredSlice []KvSet[K, V]
	Begin         int
	End           int
}

func NewCenterTree added in v1.1.20

func NewCenterTree[K any, V any](growth int, cmp func(a, b K) int) *CenterTree[K, V]

func (*CenterTree[K, V]) FastMerge added in v1.1.36

func (s *CenterTree[K, V]) FastMerge(set OrderedMap[K, V]) int

func (*CenterTree[K, V]) Filter added in v1.1.34

func (s *CenterTree[K, V]) Filter(cb func(K, V) bool)

Deletes all elements for which the callback returns true

func (*CenterTree[K, V]) FilterBetween added in v1.1.36

func (s *CenterTree[K, V]) FilterBetween(cb func(k K, v V) bool, a, b K, opt ...int)

func (*CenterTree[K, V]) MassRemove added in v1.1.20

func (s *CenterTree[K, V]) MassRemove(keys ...K) (total int)

MassRemove implements OrderedMap

func (*CenterTree[K, V]) MassRemoveKV added in v1.1.20

func (s *CenterTree[K, V]) MassRemoveKV(keys ...K) iter.Seq2[K, V]

MassRemoveKV implements OrderedMap

func (*CenterTree[K, V]) Merge added in v1.1.20

func (s *CenterTree[K, V]) Merge(set OrderedMap[K, V]) int

Merge implements OrderedMap

func (*CenterTree[K, V]) Put added in v1.1.20

func (s *CenterTree[K, V]) Put(k K, v V)

Put implements OrderedMap

func (CenterTree[K, V]) Remove added in v1.1.20

func (s CenterTree[K, V]) Remove(key K) (value V, ok bool)

Remove implements OrderedMap

func (*CenterTree[K, V]) RemoveAll added in v1.1.20

func (s *CenterTree[K, V]) RemoveAll() (size int)

RemoveAll implements OrderedMap

func (*CenterTree[K, V]) RemoveBetween added in v1.1.20

func (s *CenterTree[K, V]) RemoveBetween(a, b K, opt ...int) (total int)

RemoveBetween implements OrderedMap

func (*CenterTree[K, V]) RemoveBetweenKV added in v1.1.20

func (s *CenterTree[K, V]) RemoveBetweenKV(a, b K, opt ...int) (removed iter.Seq2[K, V])

RemoveBetweenKV implements OrderedMap

func (*CenterTree[K, V]) SetGrowth added in v1.1.20

func (s *CenterTree[K, V]) SetGrowth(grow int)

SetGrowth implements OrderedMap

func (*CenterTree[K, V]) ToTs added in v1.1.20

func (s *CenterTree[K, V]) ToTs() OrderedMap[K, V]

type KvSet

type KvSet[K any, V any] struct {
	Key   K
	Value V
}

func MergeKvSet added in v1.1.36

func MergeKvSet[K any, V any](left, right, dst []KvSet[K, V], start, finish int, Cmp func(a, b K) int, OnOverwrite func(K, V, V)) (res []KvSet[K, V], end int)

type OrderedMap

type OrderedMap[K any, V any] interface {
	// Should return an iterator for all key/value pairs
	All() iter.Seq2[K, V]

	// Returns an iterator over all keys.
	Keys() iter.Seq[K]

	// Returns an iterator over all values.
	Values() iter.Seq[V]

	// Returns true if the key exists, false if it does not.
	Exists(key K) bool

	// Returns true if the key is between the first and last key
	Contains(key K) bool

	// Sets the key to the value.
	Put(key K, value V)

	// Tries to get the value using the given key.
	// If the key exists, found is set to true; if the key does not exist, found is set to false.
	Get(key K) (value V, found bool)

	// Tries to remove the given key; value is only set if ok is true.
	Remove(key K) (value V, ok bool)

	// Removes all elements.
	// Returns how many elements were removed.
	RemoveAll() int

	// Attempts to remove all keys, returns the number of keys removed.
	MassRemove(keys ...K) (total int)

	// Attempts to remove all keys, returns an iterator with the key/value pairs
	MassRemoveKV(keys ...K) iter.Seq2[K, V]

	// Returns the current number of key/value pairs.
	Size() int

	// If ok is true, returns the first key.
	FirstKey() (key K, ok bool)

	// If ok is true, returns the last key.
	LastKey() (key K, ok bool)

	// Returns total number of elements between a and b.
	// Neither a or b are required to exist.
	//
	// The optional opt argument:
	//  - When opt[0]==omap.FIRST_KEY, a is ignored and the FirstKey is used.
	//  - When opt[0]==omap.LAST_KEY, b is ignored and the LastKey is used.
	//  - To ignore both a and b, set opt[0]==omap.FIRST_KEY|omap.LAST_KEY.
	Between(a, b K, opt ...int) (total int)

	// Returns an iterator that contains the key value sets between a and b.
	// Neither a or b are required to exist.
	//
	// The optional opt argument:
	//  - When opt[0]==omap.FIRST_KEY, a is ignored and the FirstKey is used.
	//  - When opt[0]==omap.LAST_KEY, b is ignored and the LastKey is used.
	//  - To ignore both a and b, set opt[0]==omap.FIRST_KEY|omap.LAST_KEY.
	BetweenKV(a, b K, opt ...int) (seq iter.Seq2[K, V])

	// Tries to delete the elements between a and b, returns the total number of elements deleted.
	// Neither a or b are required to exist.
	//
	// The optional opt argument:
	//  - When opt[0]==omap.FIRST_KEY, a is ignored and the FirstKey is used.
	//  - When opt[0]==omap.LAST_KEY, b is ignored and the LastKey is used.
	//  - To ignore both a and b, set opt[0]==omap.FIRST_KEY|omap.LAST_KEY.
	RemoveBetween(a, b K, opt ...int) (total int)

	// Tries to delete the elements between a and b, returns an iterator that contains the removed K,V pairs.
	// Neither a or b are required to exist.
	//
	// The optional opt argument behaves the same as in RemoveBetween.
	RemoveBetweenKV(a, b K, opt ...int) (seq iter.Seq2[K, V])

	// Should return true if the instance is thread safe, false if not.
	ThreadSafe() bool

	// Merges a given [OrderedMap] into this one.
	// Returns the number of keys added
	Merge(set OrderedMap[K, V]) int

	// Sets the overwrite notice.  Its good to know when things change!
	SetOverwrite(cb func(key K, oldValue, newValue V))

	// Sets the growth value.
	SetGrowth(grow int)

	// Returns a thread safe instance.
	// If the instance is already thread safe, then the current instance is returned.
	ToTs() OrderedMap[K, V]

	// Deletes the given element when the callback returns true
	Filter(func(k K, v V) bool)

	// Like Filter but only runs between elements a and b.
	FilterBetween(cb func(k K, v V) bool, a, b K, args ...int)

	// Returns the current full Array
	GetKvSlice() []KvSet[K, V]

	// Fast merge operation, always o(1).
	// This requires that both this instance and the given set use the same ordering.
	FastMerge(set OrderedMap[K, V]) int

	// Returns a slice containing the keys and values between a and b
	GetBetweenKvSlice(a, b K, opt ...int) []KvSet[K, V]
}
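The FIRST_KEY/LAST_KEY option used by the range methods above can be sketched as a simple flag check. This illustrates the documented contract only; resolveBounds and the int keys are hypothetical, not the package's actual internals.

```go
package main

import "fmt"

// Same values as the package constants documented above.
const (
	FIRST_KEY = 1
	LAST_KEY  = 2
)

// resolveBounds sketches how the opt flags replace the caller-supplied
// bounds with the map's first and last keys.
func resolveBounds(a, b, first, last int, opt ...int) (int, int) {
	if len(opt) > 0 {
		if opt[0]&FIRST_KEY != 0 {
			a = first // a is ignored, the FirstKey is used
		}
		if opt[0]&LAST_KEY != 0 {
			b = last // b is ignored, the LastKey is used
		}
	}
	return a, b
}

func main() {
	a, b := resolveBounds(5, 9, 1, 100, FIRST_KEY|LAST_KEY)
	fmt.Println(a, b) // both bounds ignored: full range
}
```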

func NewTs

func NewTs[K any, V any](Cmp func(a, b K) int) (Map OrderedMap[K, V])

Creates a new thread safe OrderedMap instance.

type SliceTree

type SliceTree[K any, V any] struct {

	// Internally managed keys slice
	Slices []KvSet[K, V]

	// Compare function.
	Cmp func(a, b K) int

	// Required non-zero value, determines by what capacity we grow the internal
	// slice. Default is 1.
	Growth int

	// Required non-nil value, called whenever a value is overwritten.
	// Setting this function saves having to write a check when data is overwritten.
	OnOverWrite func(key K, oldValue V, newValue V)
}

func New

func New[K any, V any](cb func(a, b K) int) *SliceTree[K, V]

Creates a new SliceTree with the default Slices size of 100. If you require more control over the starting size of the slice, use the NewSliceTree function instead.

func NewFromMap

func NewFromMap[K comparable, V any](m map[K]V, cb func(a, b K) int) *SliceTree[K, V]

func NewSliceTree

func NewSliceTree[K any, V any](size int, cb func(a, b K) int) *SliceTree[K, V]

Creates a new SliceTree with the internal Slice set to "size".

func (*SliceTree[K, V]) All

func (s *SliceTree[K, V]) All() iter.Seq2[K, V]

Returns an iterator for key/value pairs. The internals of this iterator do not lock the tree or prevent updates. You can safely call an iterator from within an iterator without running into deadlocks.

func (*SliceTree[K, V]) Between

func (s *SliceTree[K, V]) Between(a, b K, opt ...int) (total int)

Between implements OrderedMap

func (*SliceTree[K, V]) BetweenKV

func (s *SliceTree[K, V]) BetweenKV(a, b K, opt ...int) (seq iter.Seq2[K, V])

BetweenKV implements OrderedMap

func (*SliceTree[K, V]) Contains

func (s *SliceTree[K, V]) Contains(key K) bool

func (*SliceTree[K, V]) Exists

func (s *SliceTree[K, V]) Exists(k K) bool

Returns true if k exists in the slice. Complexity: o(log n)

func (*SliceTree[K, V]) FastMerge added in v1.1.36

func (s *SliceTree[K, V]) FastMerge(set OrderedMap[K, V]) int

func (*SliceTree[K, V]) Filter added in v1.1.34

func (s *SliceTree[K, V]) Filter(cb func(K, V) bool)

Deletes every element for which the callback returns true

func (*SliceTree[K, V]) FilterBetween added in v1.1.36

func (s *SliceTree[K, V]) FilterBetween(cb func(K, V) bool, a, b K, opt ...int)

func (*SliceTree[K, V]) FirstKey

func (s *SliceTree[K, V]) FirstKey() (key K, ok bool)

FirstKey implements OrderedMap

func (*SliceTree[K, V]) Get

func (s *SliceTree[K, V]) Get(k K) (value V, found bool)

Tries to fetch the value for key k; if k does not exist, found is false.

func (*SliceTree[K, V]) GetBetweenKvSlice added in v1.1.39

func (s *SliceTree[K, V]) GetBetweenKvSlice(a, b K, opt ...int) []KvSet[K, V]

Returns a slice containing the elements between a and b

func (*SliceTree[K, V]) GetKvSlice added in v1.1.36

func (s *SliceTree[K, V]) GetKvSlice() []KvSet[K, V]

func (*SliceTree[K, V]) Keys

func (s *SliceTree[K, V]) Keys() iter.Seq[K]

Returns an iterator for the current keys. The internals of this iterator do not lock the tree or prevent updates. You can safely call an iterator from within an iterator without running into deadlocks.

func (*SliceTree[K, V]) LastKey

func (s *SliceTree[K, V]) LastKey() (key K, ok bool)

LastKey implements OrderedMap

func (*SliceTree[K, V]) MassRemove

func (s *SliceTree[K, V]) MassRemove(args ...K) (total int)

Attempts to remove the keys from the tree in bulk. Returns the number of keys removed.

This is almost always faster than looping over a list of keys and calling Remove one key at a time. The internals of the MassRemove method delete elements in sequential contiguous blocks, reducing the number of internal splice operations.

Complexity:

Worst case (per key removed, k): o(log(n) + log(k) + 2*k).

In practice, the real-world complexity is drastically reduced by the following:

  • Duplicate keys are de-duplicated before deletion.
  • Keys that do not exist are skipped.

The complexity is defined by the steps required:

  • Index lookups: o(log n) + k
  • A child index is created to de-duplicate and order the keys for deletion: o(log k)
  • Key deletion is done in contiguous blocks: k
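A simplified, self-contained sketch of the ordered, de-duplicated deletion pass described above, over a plain sorted []int. The real implementation splices contiguous blocks out of its internal slice; this version only illustrates why duplicate and missing keys reduce the work.

```go
package main

import (
	"fmt"
	"slices"
)

// massRemove sketches the bulk-removal strategy: sort and de-duplicate
// the keys first, then make a single ordered pass over the data so each
// element is examined once. Filters in place, reusing the backing array.
func massRemove(sorted []int, keys ...int) ([]int, int) {
	slices.Sort(keys)
	keys = slices.Compact(keys) // drop duplicate delete-keys up front
	removed := 0
	out := sorted[:0]
	j := 0
	for _, v := range sorted {
		// advance past delete-keys smaller than v (missing keys cost nothing extra)
		for j < len(keys) && keys[j] < v {
			j++
		}
		if j < len(keys) && keys[j] == v {
			removed++
			continue
		}
		out = append(out, v)
	}
	return out, removed
}

func main() {
	// 2 appears twice and 9 does not exist; only 2 and 4 are removed.
	got, n := massRemove([]int{1, 2, 3, 4, 5, 6}, 2, 2, 4, 9)
	fmt.Println(got, n)
}
```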

func (*SliceTree[K, V]) MassRemoveKV

func (s *SliceTree[K, V]) MassRemoveKV(args ...K) iter.Seq2[K, V]

func (*SliceTree[K, V]) Merge

func (s *SliceTree[K, V]) Merge(set OrderedMap[K, V]) int

Merge implements OrderedMap

func (*SliceTree[K, V]) Put

func (s *SliceTree[K, V]) Put(k K, v V)

Sets the key/value pair. Complexity: o(log n)

func (*SliceTree[K, V]) Remove

func (s *SliceTree[K, V]) Remove(k K) (value V, ok bool)

Tries to remove the element of k, returns false if it fails. Complexity: o(log n)

func (*SliceTree[K, V]) RemoveAll

func (s *SliceTree[K, V]) RemoveAll() int

Removes all elements in the slice, but keeps the memory allocated.

func (*SliceTree[K, V]) RemoveBetween

func (s *SliceTree[K, V]) RemoveBetween(a, b K, opt ...int) (total int)

RemoveBetween implements OrderedMap

func (*SliceTree[K, V]) RemoveBetweenKV

func (s *SliceTree[K, V]) RemoveBetweenKV(a, b K, opt ...int) (removed iter.Seq2[K, V])

RemoveBetweenKV implements OrderedMap

func (*SliceTree[K, V]) Set

func (s *SliceTree[K, V]) Set(index int, v V) (status bool)

Sets the value at the given index to v. This lets you bypass the o(log n) update complexity when writing to the same element over and over again. The internals still call s.OnOverWrite for you.

func (*SliceTree[K, V]) SetGrowth

func (s *SliceTree[K, V]) SetGrowth(grow int)

SetGrowth implements OrderedMap

func (*SliceTree[K, V]) SetIndex

func (s *SliceTree[K, V]) SetIndex(idx, offset int, k K, v V) (index int)

Sets the given k,v pair based on the index and offset provided by a call to GetIndex. Returns the resulting array index id.

Using a combination of GetIndex and SetIndex lets you bypass the o(log n) complexity when writing to the same node over and over again. The returned index can be reused for later updates via SetIndex with the offset set to 0.
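The cached-index write pattern can be sketched as follows. The kv type and findIndex helper are hypothetical stand-ins for KvSet and GetIndex; the point is that the o(log n) search happens once, and subsequent writes to the same slot are o(1).

```go
package main

import (
	"fmt"
	"sort"
)

// kv mirrors the shape of KvSet for this sketch.
type kv struct {
	key   int
	value string
}

// findIndex does one binary search and reports whether the key exists.
func findIndex(set []kv, k int) (int, bool) {
	i := sort.Search(len(set), func(i int) bool { return set[i].key >= k })
	return i, i < len(set) && set[i].key == k
}

func main() {
	set := []kv{{1, "a"}, {2, "b"}, {3, "c"}}
	idx, ok := findIndex(set, 2) // one o(log n) lookup...
	if ok {
		for _, v := range []string{"x", "y", "z"} {
			set[idx].value = v // ...then o(1) writes to the cached slot
		}
	}
	fmt.Println(set[idx])
}
```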

func (*SliceTree[K, V]) SetOverwrite

func (s *SliceTree[K, V]) SetOverwrite(cb func(key K, oldValue, newValue V))

Sets the internal OnOverWrite function.

func (*SliceTree[K, V]) Size

func (s *SliceTree[K, V]) Size() int

Returns the total number of key/value pairs in the slice.

func (*SliceTree[K, V]) ThreadSafe

func (s *SliceTree[K, V]) ThreadSafe() bool

Returns false.

func (*SliceTree[K, V]) ToTs

func (s *SliceTree[K, V]) ToTs() OrderedMap[K, V]

Returns a thread safe instance from the current instance.

func (*SliceTree[K, V]) UnSafeMassRemove

func (s *SliceTree[K, V]) UnSafeMassRemove(keys ...K)

This method is, by definition, unsafe, but fast.

Only use if the keys being removed meet all of the following requirements:

  • no duplicate keys
  • keys are in ascending order
  • all keys currently exist in the internals of the tree

Complexity: k = number of keys; o(log n + k)

func (*SliceTree[K, V]) Values

func (s *SliceTree[K, V]) Values() iter.Seq[V]

Returns an iterator for the current values. The internals of this iterator do not lock the tree or prevent updates. You can safely call an iterator from within an iterator without running into deadlocks.

type ThreadSafeOrderedMap

type ThreadSafeOrderedMap[K any, V any] struct {
	// Instance to wrap for locking
	Tree OrderedMap[K, V]
	// contains filtered or unexported fields
}

func (*ThreadSafeOrderedMap[K, V]) All

func (s *ThreadSafeOrderedMap[K, V]) All() iter.Seq2[K, V]

All implements OrderedMap.

func (*ThreadSafeOrderedMap[K, V]) Between

func (s *ThreadSafeOrderedMap[K, V]) Between(a, b K, opt ...int) (total int)

Between implements OrderedMap

func (*ThreadSafeOrderedMap[K, V]) BetweenKV

func (s *ThreadSafeOrderedMap[K, V]) BetweenKV(a, b K, opt ...int) (seq iter.Seq2[K, V])

BetweenKV implements OrderedMap

func (*ThreadSafeOrderedMap[K, V]) Contains

func (s *ThreadSafeOrderedMap[K, V]) Contains(key K) bool

func (*ThreadSafeOrderedMap[K, V]) Exists

func (s *ThreadSafeOrderedMap[K, V]) Exists(key K) bool

Exists implements OrderedMap.

func (*ThreadSafeOrderedMap[K, V]) FastMerge added in v1.1.36

func (s *ThreadSafeOrderedMap[K, V]) FastMerge(set OrderedMap[K, V]) int

func (*ThreadSafeOrderedMap[K, V]) Filter added in v1.1.34

func (s *ThreadSafeOrderedMap[K, V]) Filter(cb func(K, V) bool)

func (*ThreadSafeOrderedMap[K, V]) FilterBetween added in v1.1.36

func (s *ThreadSafeOrderedMap[K, V]) FilterBetween(cb func(k K, v V) bool, a, b K, opt ...int)

func (*ThreadSafeOrderedMap[K, V]) FirstKey

func (s *ThreadSafeOrderedMap[K, V]) FirstKey() (key K, ok bool)

FirstKey implements OrderedMap

func (*ThreadSafeOrderedMap[K, V]) Get

func (s *ThreadSafeOrderedMap[K, V]) Get(key K) (value V, found bool)

Get implements OrderedMap.

func (*ThreadSafeOrderedMap[K, V]) GetBetweenKvSlice added in v1.1.39

func (s *ThreadSafeOrderedMap[K, V]) GetBetweenKvSlice(a, b K, opt ...int) []KvSet[K, V]

func (*ThreadSafeOrderedMap[K, V]) GetKvSlice added in v1.1.36

func (s *ThreadSafeOrderedMap[K, V]) GetKvSlice() []KvSet[K, V]

func (*ThreadSafeOrderedMap[K, V]) Keys

func (s *ThreadSafeOrderedMap[K, V]) Keys() iter.Seq[K]

Keys implements OrderedMap.

func (*ThreadSafeOrderedMap[K, V]) LastKey

func (s *ThreadSafeOrderedMap[K, V]) LastKey() (key K, ok bool)

LastKey implements OrderedMap

func (*ThreadSafeOrderedMap[K, V]) MassRemove

func (s *ThreadSafeOrderedMap[K, V]) MassRemove(keys ...K) (total int)

MassRemove implements OrderedMap.

func (*ThreadSafeOrderedMap[K, V]) MassRemoveKV

func (s *ThreadSafeOrderedMap[K, V]) MassRemoveKV(keys ...K) iter.Seq2[K, V]

func (*ThreadSafeOrderedMap[K, V]) Merge

func (s *ThreadSafeOrderedMap[K, V]) Merge(set OrderedMap[K, V]) int

Merge implements OrderedMap

func (*ThreadSafeOrderedMap[K, V]) Put

func (s *ThreadSafeOrderedMap[K, V]) Put(key K, value V)

Put implements OrderedMap.

func (*ThreadSafeOrderedMap[K, V]) Remove

func (s *ThreadSafeOrderedMap[K, V]) Remove(key K) (V, bool)

Remove implements OrderedMap.

func (*ThreadSafeOrderedMap[K, V]) RemoveAll

func (s *ThreadSafeOrderedMap[K, V]) RemoveAll() int

RemoveAll implements OrderedMap.

func (*ThreadSafeOrderedMap[K, V]) RemoveBetween

func (s *ThreadSafeOrderedMap[K, V]) RemoveBetween(a, b K, opt ...int) (total int)

RemoveBetween implements OrderedMap.

func (*ThreadSafeOrderedMap[K, V]) RemoveBetweenKV

func (s *ThreadSafeOrderedMap[K, V]) RemoveBetweenKV(a, b K, opt ...int) (seq iter.Seq2[K, V])

RemoveBetweenKV implements OrderedMap.

func (*ThreadSafeOrderedMap[K, V]) SetGrowth

func (s *ThreadSafeOrderedMap[K, V]) SetGrowth(grow int)

func (*ThreadSafeOrderedMap[K, V]) SetOverwrite

func (s *ThreadSafeOrderedMap[K, V]) SetOverwrite(cb func(key K, oldValue, newValue V))

SetOverwrite implements OrderedMap

func (*ThreadSafeOrderedMap[K, V]) Size

func (s *ThreadSafeOrderedMap[K, V]) Size() int

Size implements OrderedMap.

func (*ThreadSafeOrderedMap[K, V]) ThreadSafe

func (s *ThreadSafeOrderedMap[K, V]) ThreadSafe() bool

Always returns true.

func (*ThreadSafeOrderedMap[K, V]) ToTs

func (s *ThreadSafeOrderedMap[K, V]) ToTs() OrderedMap[K, V]

Always returns this instance.

func (*ThreadSafeOrderedMap[K, V]) Values

func (s *ThreadSafeOrderedMap[K, V]) Values() iter.Seq[V]

Values implements OrderedMap.

Directories

Path Synopsis
examples
Iterators command
hello command
