How to implement the idempotent filter on CloudHub

Written by Coforge-Salesforce BU | Dec 11, 2018 6:30:00 PM

The Idempotent Filter is an important message filter in the Mule platform. It ensures that a service receives only unique messages by checking the unique ID of each incoming message.

By default, Anypoint Platform CloudHub does not provide a transparent distributed implementation. The typical filter on a Mule flow works fine when deployed on a standalone Mule cluster, but when the same application is deployed on CloudHub workers, only the worker that published the entry can read the original payload from its object store; the other workers fail to retrieve the cached content and end up storing the same key again. Each worker is a dedicated instance of Mule that runs your integration application. Workers may have different memory capacity and processing power depending on how you configure them at the application level, and they can be scaled vertically and horizontally.

Because CloudHub workers are not clustered, when we scale them horizontally, for example to improve throughput, the ObjectStore used by the message filter is not shared. This means that although the default in-memory ObjectStore is intended to be shared across a cluster through Hazelcast replication as the backend mechanism, this fails when the application is deployed on CloudHub.

To prevent the same message from being processed twice, we can implement the Idempotent Filter. A distributed idempotent filter mechanism can be built on the default CloudHub persistent ObjectStore (Object Store V2), which is automatically registered in Spring under the bean name _defaultUserObjectStore.
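Assuming a Mule 3 application, a minimal flow that references the CloudHub-registered store might look like the following sketch; the flow name, HTTP listener configuration and ID expression are illustrative:

```xml
<!-- Illustrative flow: the idempotent filter resolves the CloudHub
     persistent store by its registered bean name. -->
<flow name="dedup-flow">
    <http:listener config-ref="HTTP_Listener_Configuration" path="/orders" doc:name="HTTP"/>
    <idempotent-message-filter idExpression="#[message.inboundProperties.'order-id']"
                               objectStore-ref="_defaultUserObjectStore"
                               doc:name="Idempotent Message"/>
    <logger message="Processing unique order #[message.inboundProperties.'order-id']" level="INFO" doc:name="Logger"/>
</flow>
```

Because _defaultUserObjectStore is backed by Object Store V2, every worker resolves the same persistent store, so duplicates are filtered regardless of which worker receives the message.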

Configuring an ObjectStore for the Filter


By default, Mule stores all cached responses in an InMemoryObjectStore. If you want to customise the way Mule stores cached responses, below are the ways of creating a caching strategy and defining a new object store:

  1. custom-object-store: Custom object stores are useful when you want to use a custom persistence mechanism for your ObjectStore.
  2. In-memory: This stores the data in system memory. Data stored in-memory is non-persistent, which means that in case of an application restart or crash, the cached data is lost.
  3. Managed-store: This stores the data in a place defined by ListableObjectStore. Data stored in a managed store is persistent, so it survives an application restart or crash, but note that the store is scoped to a single worker.
  4. Simple-text-file-store: This stores the data in a file. Data stored with the simple-text-file-store configuration is persistent, so it survives an application restart or crash.
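
As a sketch of the non-custom options, these strategies are configured as child elements of the filter. The element names and attributes below follow the Mule 3 schema; the store names, expressions and values are illustrative:

```xml
<!-- In-memory store: fast, but non-persistent and per worker -->
<idempotent-message-filter idExpression="#[message.inboundProperties.'messageId']">
    <in-memory-store name="dedup-store" entryTTL="60000"
                     expirationInterval="30000" maxEntries="1000"/>
</idempotent-message-filter>

<!-- Managed store: persistent, but still scoped to a single worker -->
<idempotent-message-filter idExpression="#[message.inboundProperties.'messageId']">
    <managed-store storeName="dedup-store" persistent="true"/>
</idempotent-message-filter>

<!-- Simple text file store: persists entries to a file on the worker -->
<idempotent-message-filter idExpression="#[message.inboundProperties.'messageId']">
    <simple-text-file-store name="dedup-store" directory="./dedup"/>
</idempotent-message-filter>
```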

Configuring custom-object-store

Below is a reference example of how to add a custom-object-store to the idempotent message filter:

  • Place the idempotent message filter component in the Mule flow.
  • Select custom-object-store from the list of available stores.
  • Select the CustomObjectStore Java class and provide values for the entryTTL, expirationInterval and localObjectStore properties.
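Putting these steps together, the filter configuration might look like the following sketch; the class name, ID expression and property values are illustrative:

```xml
<idempotent-message-filter idExpression="#[message.inboundProperties.'messageId']"
                           doc:name="Idempotent Message">
    <custom-object-store class="com.example.store.CustomObjectStore">
        <spring:property name="entryTTL" value="60000"/>
        <spring:property name="expirationInterval" value="30000"/>
        <spring:property name="localObjectStore" ref="_defaultUserObjectStore"/>
    </custom-object-store>
</idempotent-message-filter>
```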

Define the Custom Object Store
Below is an example of how to implement a custom object store by extending AbstractMonitoredObjectStore.

The ObjectStore strategy defined under the idempotent message filter provides the main functionality, implementing the cache through a custom ObjectStore using custom classes. Three properties are configured on the custom ObjectStore:

  • entryTTL: the time to live on objects, expressed in milliseconds
  • expirationInterval: the interval at which the cache entries are scanned for expired entries
  • localObjectStore: a reference to the “_defaultUserObjectStore” bean registered under Object Store V1 or V2

Below are the Java classes used for the idempotent message filter's custom object store:
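The original snippet is not reproduced here. As a plain-JDK sketch of the semantics such a class must implement — duplicate detection plus TTL-based expiry — consider the following; in a real Mule 3 project the class would extend AbstractMonitoredObjectStore and delegate storage to the injected localObjectStore, for which a ConcurrentHashMap stands in below:

```java
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// JDK-only sketch of the behaviour a CustomObjectStore must provide.
// The class name and structure are illustrative, not Mule API.
public class TtlObjectStoreSketch {
    private final Map<Serializable, Entry> backingStore = new ConcurrentHashMap<>();
    private final long entryTTL; // time-to-live per entry, in milliseconds

    private static final class Entry {
        final Serializable value;
        final long storedAt;
        Entry(Serializable value, long storedAt) {
            this.value = value;
            this.storedAt = storedAt;
        }
    }

    public TtlObjectStoreSketch(long entryTTL) {
        this.entryTTL = entryTTL;
    }

    // Mirrors ObjectStore.store(): a duplicate key means the message
    // was already seen, so the filter should reject it.
    public synchronized boolean store(Serializable key, Serializable value) {
        if (contains(key)) {
            return false; // duplicate message id
        }
        backingStore.put(key, new Entry(value, System.currentTimeMillis()));
        return true;
    }

    public boolean contains(Serializable key) {
        Entry e = backingStore.get(key);
        return e != null && !isExpired(e);
    }

    private boolean isExpired(Entry e) {
        return entryTTL > 0 && System.currentTimeMillis() - e.storedAt > entryTTL;
    }

    // Mirrors the monitored store's expiry sweep, which Mule runs
    // every expirationInterval milliseconds.
    public void expire() {
        backingStore.values().removeIf(this::isExpired);
    }
}
```

The key design point is that store() must atomically check-and-put: if two workers (or threads) race on the same message ID, only one store succeeds and the other message is filtered out.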

The above steps will help you implement Idempotent Filter functionality for an application deployed on multiple workers on CloudHub.