Implementing a proxy server on the integration bus

This article describes the prerequisites for building a proxy-cache at the ESB level and the reasons for moving from one version of it to another. After the solution was implemented at one of the major banks it was abandoned, and at the moment its fate is not fully known. The aim of the article is to share the way of thinking and the opportunities that the proposed solution provides.

Context


The application business process places a request in the canonical format into the Tibco EMS queue request_queue (a sketch of sending such a message follows the field list below):

<request><service>service_name</service><client>client_id</client><requestData>request_data</requestData><replyTo>response_queue</replyTo></request>

  • service_name - the name of the service, implemented by one of the information systems
  • client_id - identifier of the client for whom the call is made
  • request_data - body of the request to the service
  • response_queue - queue name for an asynchronous response
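
For illustration, here is a minimal sketch of how an application process might publish such a request, assuming the standard JMS API (Tibco EMS ships a JMS-compatible Java client). The connection setup and class names are assumptions for illustration, not part of the original solution:

import javax.jms.*;

public class RequestPublisher {

    // The ConnectionFactory would come from the EMS client library or a JNDI
    // lookup; it is passed in here to keep the sketch vendor-neutral.
    public void sendRequest(ConnectionFactory factory, String serviceName,
                            String clientId, String requestData,
                            String responseQueue) throws JMSException {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                    session.createProducer(session.createQueue("request_queue"));

            // Canonical request envelope from the article.
            String xml = "<request>"
                    + "<service>" + serviceName + "</service>"
                    + "<client>" + clientId + "</client>"
                    + "<requestData>" + requestData + "</requestData>"
                    + "<replyTo>" + responseQueue + "</replyTo>"
                    + "</request>";

            producer.send(session.createTextMessage(xml));
        } finally {
            connection.close();
        }
    }
}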

The Tibco BW integration process parses service_name and routes the request to the appropriate service. The service's response is placed into the response_queue queue, from which the application process retrieves it (a receiving sketch follows the field description below):

<response><service>service_name</service><client>client_id</client><responseData>response_data</responseData></response>

  • response_data - response body
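
Symmetrically, the application process can retrieve the response from its reply queue; a minimal sketch, again assuming a JMS-compatible client (the 30-second timeout is an arbitrary assumption):

import javax.jms.*;

public class ResponseConsumer {

    // Blocks until a response arrives on the given queue, then returns its
    // XML body, or null on timeout.
    public String receiveResponse(ConnectionFactory factory, String responseQueue)
            throws JMSException {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer =
                    session.createConsumer(session.createQueue(responseQueue));
            connection.start(); // delivery does not begin until start() is called

            Message message = consumer.receive(30_000);
            return (message instanceof TextMessage)
                    ? ((TextMessage) message).getText() // the <response>...</response> XML
                    : null;
        } finally {
            connection.close();
        }
    }
}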

Some services respond within a few seconds, others take tens of seconds. That is too long for the application process, especially considering that the same request may be repeated by the same or by a different application process.

These are the logical prerequisites for creating a proxy-cache.

First version and prerequisites


A proxy variant was implemented. It had a number of characteristics that became the prerequisites for further improvements:

1: Business Data Storage


The response of each service was stored in a separate database table as business data. For example, the list of a client's identifiers in different systems was stored in a table with the columns:

  • customer
  • system
  • id
  • update date

2: DB-level logic


The response of each service required its own processing logic at the database level. For example, client identifiers were parsed from the XML response, stamped with the update date, and stored in the database.
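
In the original solution this parsing lived in database-level code (for example, stored procedures). Purely to illustrate the kind of extraction involved, here is an equivalent in Java; the XPath expression and element names are assumptions about the payload:

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class IdentifierParser {

    // Pulls client identifiers out of a service response. The XPath and the
    // element names are assumptions about the payload, for illustration only.
    public static NodeList extractIdentifiers(String responseXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(responseXml)));
        return (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate("//responseData/id", doc, XPathConstants.NODESET);
    }
}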

3: Access to the database from application processes


The logic of application processes was coupled to the structure of the database. For example, a client's identifier in a given system was obtained by reading the corresponding field of the table.
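
To make the coupling concrete: the lookup amounted to something like the following (the real processes were Tibco BW flows, not Java; the table and column names are assumed to match the structure described above):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ClientIdLookup {

    // Reads a client's identifier in a given external system straight from
    // the cache table -- the process is bound to the table's structure.
    public String findClientId(Connection db, String customer, String system)
            throws SQLException {
        String sql = "SELECT id FROM client_identifiers"
                + " WHERE customer = ? AND system = ?";
        try (PreparedStatement stmt = db.prepareStatement(sql)) {
            stmt.setString(1, customer);
            stmt.setString(2, system);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString("id") : null;
            }
        }
    }
}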

4: Additional functionality of application processes


The application process logic was burdened with functionality for keeping the cache current. For example, if no up-to-date value was present in the database, the process itself initiated a request to the corresponding service and updated the values in the database.
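
Schematically, each application process carried logic along these lines (a Java sketch; the one-hour freshness threshold and the helper methods standing in for JDBC access and the EMS round trip are assumptions):

import java.time.Duration;
import java.time.Instant;

public class StaleCacheWorkflow {

    private static final Duration MAX_AGE = Duration.ofHours(1); // assumed threshold

    // The burden on the application process in the first version: it had to
    // check freshness itself and refresh the cache on a miss.
    public String getClientId(String customer, String system) {
        CachedValue cached = readFromDb(customer, system);
        if (cached != null
                && cached.updatedAt.isAfter(Instant.now().minus(MAX_AGE))) {
            return cached.id; // fresh enough, use the cached value
        }
        String fresh = callServiceViaEsb(customer, system); // request + wait for reply
        writeToDb(customer, system, fresh);                 // keep the cache current
        return fresh;
    }

    // Placeholders standing in for DB access and the ESB request/reply.
    static class CachedValue { String id; Instant updatedAt; }
    CachedValue readFromDb(String customer, String system) { return null; }
    String callServiceViaEsb(String customer, String system) { return ""; }
    void writeToDb(String customer, String system, String id) { }
}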

5: Changes at several levels


Caching new business data required changes at several levels. For example, adding a new attribute to the client identifier in a system required:

  • adding a new field to the table
  • changing the XML parsing logic
  • specifying the new column name in the related application processes

Tasks


From these observations a number of tasks arose for refining the solution:

1: Simplification


The existing mechanism needed to be simplified to facilitate support and reduce the number of potential errors.

2: Transparency when connecting a new service


The steps needed to cache a new service, or to modify an existing one, had to be made simple and understandable.

3: Reducing coupling


It seemed attractive to separate the proxy into its own layer so that it could later be optimized independently, without changing application processes, and to reduce the proxy's coupling to the database used as cache storage, so that the storage could be replaced.

Solution


The new proxy version was implemented as a separate service with its own inbound queue. The application process places a request to the proxy into the request_queue queue (a sketch of building this envelope follows the field description below):

<request>
    <service>proxy</service>
    <client>client_id</client>
    <requestData>
        service_name
        request_data
        live_time
    </requestData>
    <replyTo>response_queue</replyTo>
</request>

  • live_time - how long the response from the service_name service remains valid in the cache
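
A sketch of building this envelope over an already-open JMS session. The article shows the three values inside requestData without explicit sub-element names, so the tags used for them below are assumptions:

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class ProxyRequestPublisher {

    // Builds and sends a request addressed to the proxy.
    public void callViaProxy(Session session, String serviceName, String clientId,
                             String requestData, long liveTimeSeconds)
            throws JMSException {
        String xml = "<request>"
                + "<service>proxy</service>"
                + "<client>" + clientId + "</client>"
                + "<requestData>"
                +     "<serviceName>" + serviceName + "</serviceName>"   // assumed tag
                +     "<requestData>" + requestData + "</requestData>"
                +     "<liveTime>" + liveTimeSeconds + "</liveTime>"     // assumed tag
                + "</requestData>"
                + "<replyTo>response_queue</replyTo>"
                + "</request>";

        MessageProducer producer =
                session.createProducer(session.createQueue("request_queue"));
        producer.send(session.createTextMessage(xml));
    }
}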

Based on the service_name and client_id parameters, the proxy searches the cache for a current response from the service_name service for the client client_id. If such a response exists, it is sent to the response_queue queue as a response from the service_name service. If there is none, a request to the service_name service is generated (the decision logic is sketched after the list below):

<request><service>service_name</service><client>client_id</client><requestData>request_data</requestData><replyTo>proxy_response_queue</replyTo></request>

  • proxy_response_queue - incoming proxy queue for service responses
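
The proxy's core decision can be sketched as follows (the real implementation was a Tibco BW process; the cache abstraction and its method names are assumptions):

import javax.jms.JMSException;
import javax.jms.Session;

public class ProxyService {

    // Minimal cache abstraction: null means no live entry.
    interface ResponseCache {
        String get(String serviceName, String clientId);
    }

    private final ResponseCache cache;
    private final Session session;

    ProxyService(ResponseCache cache, Session session) {
        this.cache = cache;
        this.session = session;
    }

    // Answer from the cache if a live entry exists; otherwise forward the
    // request to the real service, redirecting the reply to the proxy's own
    // inbound queue.
    public void handle(String serviceName, String clientId, String requestData,
                       long liveTimeSeconds, String responseQueue) throws JMSException {
        String cachedXml = cache.get(serviceName, clientId);
        if (cachedXml != null) {
            // Cache hit: reply immediately, as if the service itself answered.
            send(responseQueue, cachedXml);
        } else {
            // Cache miss: the original replyTo is replaced by the proxy's queue.
            // liveTimeSeconds and responseQueue must be remembered (e.g. keyed
            // by serviceName + clientId) until the service's reply arrives.
            send("request_queue", "<request>"
                    + "<service>" + serviceName + "</service>"
                    + "<client>" + clientId + "</client>"
                    + "<requestData>" + requestData + "</requestData>"
                    + "<replyTo>proxy_response_queue</replyTo>"
                    + "</request>");
        }
    }

    private void send(String queueName, String xml) throws JMSException {
        session.createProducer(session.createQueue(queueName))
               .send(session.createTextMessage(xml));
    }
}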

When a response arrives in the proxy_response_queue queue, it is stored in the cache. Then the proxy's response is placed in the response_queue queue:
<response><service>service_name</service><client>client_id</client><responseData>response_data</responseData></response>

Note that service responses are stored in the cache whole, as XML.
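
Because responses are cached whole, the storage interface stays trivial. Below is an in-memory stand-in for illustration; the production cache was a database, and all names here are assumptions:

import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class InMemoryResponseCache {

    // One entry per (service, client) pair: the whole response XML plus expiry.
    private record Entry(String xml, Instant expiresAt) { }

    private final Map<String, Entry> entries = new ConcurrentHashMap<>();

    private static String key(String serviceName, String clientId) {
        return serviceName + "|" + clientId;
    }

    // live_time from the request determines how long the entry stays valid.
    public void put(String serviceName, String clientId, String responseXml,
                    long liveTimeSeconds) {
        entries.put(key(serviceName, clientId),
                new Entry(responseXml, Instant.now().plusSeconds(liveTimeSeconds)));
    }

    // Returns the cached XML, or null if the entry is absent or expired.
    public String get(String serviceName, String clientId) {
        Entry e = entries.get(key(serviceName, clientId));
        return (e == null || Instant.now().isAfter(e.expiresAt())) ? null : e.xml();
    }
}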

Profit


The advantages of the solution, which recoup the development costs, are:

1: Ease of implementation


The mechanism has no logic at the database level and uses a limited set of tables. The list of tables and their structure are independent of the cached services.

2: Application offloading


Application processes query the proxy and receive an up-to-date response. The proxy mechanism itself determines whether a current response is available in the cache and, if necessary, generates a request to the service, then receives and saves the response.

3: Encapsulation


The proxy functionality is encapsulated in a separate mechanism with fixed entry points. The cache (database) is replaceable and is not accessed directly by application processes.

4: Versatility


Changes to the structure of cached service responses, or to business data needs, do not require adjustments to the proxy.

5: Transparency


Caching a new service's response requires only administrative configuration of the proxy.

Thanks for your attention!
