LinkedHashMap in Java

What is LinkedHashMap

A LinkedHashMap, like HashMap, is a simple yet powerful way to store and retrieve data.
Unlike HashMap, LinkedHashMap is the hash table and linked list implementation of the Map interface and stores items as key/value pairs.
Like HashMap, LinkedHashMap permits only unique keys. It also permits a single null key (whereas Hashtable does not allow any null key) and may have any number of null values.
Unlike HashMap, it does guarantee the order of the elements stored in it.
All operations on a LinkedHashMap (as on a HashMap) are unsynchronized, whereas in Hashtable all operations are synchronized.

The LinkedHashMap implementation maintains a doubly-linked list running through all of its entries. This linked list defines the iteration ordering, which is normally the order in which keys were inserted into the map (insertion-order) and this insertion order is not affected if a key is re-inserted into the map.

A special constructor, shown below, is provided to create a LinkedHashMap whose iteration order is the order in which its entries were last accessed, from least-recently accessed to most-recently accessed (access-order).

public LinkedHashMap(int initialCapacity,
                     float loadFactor,
                     boolean accessOrder)

Constructs an empty LinkedHashMap instance with the specified initial capacity, load factor and ordering mode.

initialCapacity – the initial capacity
loadFactor – the load factor
accessOrder – the ordering mode – true for access-order, false for insertion-order
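As a minimal sketch of access-order behavior (the class and variable names here are illustrative), accessing an entry with get moves it to the end of the iteration order:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iterate from least-recently to most-recently accessed
        Map<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a"); // accessing "a" moves it to the end of the iteration order
        System.out.println(map.keySet()); // prints [b, c, a]
    }
}
```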

Like HashMap, LinkedHashMap provides constant-time performance, i.e., O(1), for the two basic operations get and put, provided the hash function disperses elements properly among the buckets.

Two parameters affect the performance of a LinkedHashMap instance: capacity and load factor.

The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created.

The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed, i.e., internal data structures are rebuilt, so that the hash table has approximately twice the number of buckets.

Iteration over a LinkedHashMap requires time proportional to its size (the number of key/value mappings), regardless of its capacity (the number of buckets). In this respect it differs from HashMap, whose iteration cost also grows with capacity; a high initial capacity (or low load factor) is therefore less penalizing for LinkedHashMap iteration than it is for HashMap.

The expected number of entries in the map and its load factor should be taken into account when setting its initial capacity, so as to minimize the number of rehash operations.
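For instance, a sketch of sizing the initial capacity from the expected entry count (the numbers here are illustrative assumptions) so the threshold is never crossed and no rehash occurs:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CapacitySizing {
    public static void main(String[] args) {
        int expectedEntries = 1000;
        float loadFactor = 0.75f;
        // Choose an initial capacity large enough that expectedEntries never
        // exceeds capacity * loadFactor, so no rehash is ever triggered.
        int initialCapacity = (int) Math.ceil(expectedEntries / loadFactor);
        Map<String, String> map = new LinkedHashMap<>(initialCapacity, loadFactor);
        System.out.println(initialCapacity); // prints 1334
    }
}
```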

As a general rule of thumb, the default load factor (.75) offers a good tradeoff between time and space costs.

Accessing in a multi-threaded environment

Like HashMap, the LinkedHashMap implementation is not synchronized. If multiple threads access a linked hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally. This is typically accomplished by synchronizing on some object that naturally encapsulates the map. If no such object exists, the map should be “wrapped” using the Collections.synchronizedMap method. This is best done at creation time, to prevent accidental unsynchronized access to the map:

Map m = Collections.synchronizedMap(new LinkedHashMap(...));

In a multi-threaded environment you can use ConcurrentHashMap instead of a synchronized LinkedHashMap. With roughly equal numbers of reader and writer threads the two give similar performance, but ConcurrentHashMap gives the best performance when you have many reader threads and very few writer threads. Note, however, that ConcurrentHashMap does not maintain insertion or access order.
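A small sketch of the wrapped map (the class name is illustrative): individual get/put calls on the wrapper are synchronized automatically, but iteration must still be synchronized manually on the wrapper object itself.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class SynchronizedWrapperDemo {
    public static void main(String[] args) {
        Map<String, String> m = Collections.synchronizedMap(new LinkedHashMap<>());
        m.put("Google", "Android");
        m.put("Apple", "iOS");
        // Iteration is not atomic; hold the wrapper's lock for the whole loop.
        synchronized (m) {
            for (Map.Entry<String, String> e : m.entrySet()) {
                System.out.println(e.getKey() + " => " + e.getValue());
            }
        }
    }
}
```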

LinkedHashMap Fail-fast

If the map is structurally modified at any time after the iterator is created, in any way except through the iterator’s own remove method, the iterator will throw a ConcurrentModificationException.  Thus, in the face of concurrent modification, the iterator fails quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future.

The fail-fast behavior of an iterator cannot be guaranteed and iterators throw ConcurrentModificationException on a best-effort basis. Therefore, it would be wrong to write a program that depended on this exception for its correctness: the fail-fast behavior of iterators should be used only to detect bugs.

Set keys = linkedHashMap.keySet();
for (Object key : keys) {
    linkedHashMap.put(someObject, someValue); // throws ConcurrentModificationException on the next iteration
}


Set keys = linkedHashMap.keySet();
for (Object key : keys) {
    linkedHashMap.remove(key); // throws ConcurrentModificationException on the next iteration
}

package com.roytuts.collections;

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Map.Entry;

public class LinkedHashMapExample {
    public static void main(String[] args) {
        Map<String, String> llMap = new LinkedHashMap<>();
        llMap.put("RedHat", "Unix");
        llMap.put("Google", "Android");
        llMap.put("Apple", "iOS");
        llMap.put("Microsoft", "Windows");
        // iterate over the LinkedHashMap using forEach
        llMap.forEach((k, v) -> System.out.println(k + " => " + v));
        // iterate using an iterator
        Iterator<Entry<String, String>> iterator = llMap.entrySet().iterator();
        while (iterator.hasNext()) {
            Entry<String, String> entry = iterator.next();
            System.out.println(entry.getKey() + " => " + entry.getValue());
        }
    }
}

Output:

RedHat => Unix
Google => Android
Apple => iOS
Microsoft => Windows
RedHat => Unix
Google => Android
Apple => iOS
Microsoft => Windows

Thanks for reading.
