Thing which seemed very Thingish inside you is quite different when it gets out into the open and has other people looking at it

Thursday, October 11, 2012

How to configure MT-Logging with WSO2 Products



Prerequisites

Carbon 4.0.0 and above product
WSO2 BAM 2.0.0 and above
Apache Server installed


Introduction 

The Stratos MT-Logging architecture provides a logging framework to send logs to BAM, which opens up a wide variety of possibilities for monitoring logs. In my previous article I explained the architecture of Distributed Logging with WSO2 BAM; this tutorial explains how to set up logging effectively for any WSO2 product and how to analyze and monitor logs.


Architecture



Setting up Hadoop Server to host archived log files

Once the logs are sent to BAM, we analyze them daily and send them to a file system. For better archived-log analytics performance, we send the archived logs to an HDFS file system, so we can analyze them using MapReduce tasks (big data, long-term data analysis).

Please refer to How to Configure Hadoop to see how to set up a Hadoop cluster. Once you have a Hadoop cluster, provide your HDFS information in summarizer-config.xml; the summarizer will then automatically analyze your daily logs and send them to the HDFS file system.

Summarizer Configuration for log archiving.

<cronExpression>0 0 1 ? * * *</cronExpression>
<tmpLogDirectory>/home/usr/temp/logs</tmpLogDirectory>
<hdfsConfig>hdfs://localhost:9000</hdfsConfig>
<archivedLogLocation>/stratos/archivedLogs/</archivedLogLocation>
<bamUserName>admin</bamUserName>
<bamPassword>admin</bamPassword>

cronExpression - the schedule on which the summarizer runs daily
tmpLogDirectory - the local temporary directory used while archiving
hdfsConfig - the HDFS file server information (URL)
archivedLogLocation - the HDFS file path where the archived logs are saved
bamUserName / bamPassword - the BAM server credentials

Setting up Log4jAppender - Server Side (AS/ESB/GREG/etc)

To publish log events to BAM, a log4j appender must be configured in each server. To do that, add LOGEVENT to the root logger and configure the LOGEVENT credentials accordingly.

Add LOGEVENT to the root logger in log4j
Go to Server_Home/repository/conf/log4j.properties and add LOGEVENT to the log4j root logger (or replace the following line):
log4j.rootLogger=INFO, CARBON_CONSOLE, CARBON_LOGFILE, CARBON_MEMORY, CARBON_SYS_LOG,LOGEVENT

Add data publishing URLs and credentials
Go to Server_Home/repository/conf/log4j.properties. Modify the LOGEVENT appender: set LOGEVENT.url to the BAM server Thrift URL, and set LOGEVENT.userName and LOGEVENT.password accordingly.
                                                            
log4j.appender.LOGEVENT=org.wso2.carbon.logging.appender.LogEventAppender
log4j.appender.LOGEVENT.url=tcp://localhost:7611
log4j.appender.LOGEVENT.layout=org.wso2.carbon.utils.logging.TenantAwarePatternLayout
log4j.appender.LOGEVENT.columnList=%T,%S,%A,%d,%c,%p,%m,%H,%I,%Stacktrace
log4j.appender.LOGEVENT.userName=admin
log4j.appender.LOGEVENT.password=admin

Enabling the Log Viewer
When the log viewer is not enabled to take logs from Cassandra, its default behaviour is to take logs from carbon memory, which displays only the most recent logs of the carbon server. To get persisted logs (logs from the current date), set isDataFromCassandra to true; you can then view persisted logs through the management console of any carbon server (ESB/DSS/AS, etc.). You also need to give the user credentials of the Cassandra server, as shown below.

Change logging-config.xml to view logs from BAM
Go to Server_Home/repository/conf/etc/logging-config.xml

Enable isDataFromCassandra
<isDataFromCassandra>true</isDataFromCassandra>

Give the Cassandra URL of the BAM server
<cassandraHost>localhost:9160</cassandraHost>

Give BAM Server user credentials to access Cassandra Server in BAM
<userName>admin</userName>
<password>admin</password>

Give the Hadoop HDFS URL where the archived logs are hosted, for the log viewer

<archivedHost>hdfs://localhost:9000</archivedHost>
<archivedHDFSPath>/stratos/logs</archivedHDFSPath>



Setting up Logging Analyzer - WSO2 BAM Side

Setting up BAM
Bind IPs for Cassandra (this is not logging related; it simply binds an IP address to Cassandra so that Cassandra does not start on localhost)

Copy cassandra.yaml from {WSO2_BAM_HOME}/repository/components/features/org.wso2.carbon.cassandra.server_4.0.1/conf/cassandra.yaml to repository/conf/etc. Change the IP address (localhost) to the correct IP address of BAM for listen_address and rpc_address.


Copy cassandra-component.xml from {WSO2_BAM_HOME}/repository/components/features/org.wso2.carbon.cassandra.dataaccess_4.0.1/conf/cassandra-component.xml to repository/conf/etc. Change the IP address (localhost) in the connection URL (e.g. 192.168.4.148:9160) to the correct IP address of BAM.



Installing Logging Summarizer
Download the P2 profile which contains the logging summarizer feature. Install the logging.summarizer feature through the Management Console (go to Configure -> Features and click Add Repository). Once you add the repository, you will be redirected to a page listing the available features. Select the BAM summarizer feature and install it.

Change logging-config.xml:
<isDataFromCassandra>true</isDataFromCassandra>


Change the log rotation paths: give the log directory as the Apache log rotation directory, and give the BAM username and password credentials.

<publisherURL>tcp://localhost:7611</publisherURL>
<publisherUser>admin</publisherUser>
<publisherPassword>admin</publisherPassword>
<logDirectory>/home/usr/apache/logs/</logDirectory>
<tmpLogDirectory>/home/usr/temp/logs</tmpLogDirectory>


Point BAM to the external HDFS file server

In order to point the analyzers to the HDFS file system, update BAM_HOME/repository/conf/advanced/hive-site.xml to point to your HDFS file system.


<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>




Now logging is configured on both the publisher and receiver sides, and you can view your logs by logging in to the management console and viewing System Logs. This shows the current logs as well as the archived logs taken from the Apache server; logs are archived daily to the Apache server through a cron job.

If you want to analyze logs using Hive analytics and display them in dashboards, you can use the BAM analytics tools and dashboard toolkits to customize logging KPIs for system administration.

Wednesday, September 26, 2012

How Distributed Logging Works in WSO2 Stratos.

Why do we need distributed logging?


Stratos is a distributed, clustered setup where several servers such as ESB servers, Application Servers, Identity Servers, Governance Servers, Data Services Servers, etc. are deployed together to serve as a Platform as a Service. Each of these servers is deployed in a clustered environment, where there is more than one node per server and, depending on need, new nodes are spawned dynamically inside the cluster. All these servers are fronted by an Elastic Load Balancer, which sends each request to a selected node in a round-robin fashion.

What would you do when an error occurs in a deployment like the above, where 13 different types of servers run in production and each of them is clustered and load balanced across 50+ nodes? It would be a nightmare for system administrators to log in to each server and grep the logs to identify the exact cause of the error. This is why distributed application deployments need centralized application logs. These centralized logs should be kept in a highly scalable data store, in an ordered manner with easy access, so that users (administrators, developers) can easily access the logs whenever something unexpected happens, with the least amount of filtering needed to pinpoint the exact cause of the issue.

When designing a logging  system like above, there are several things you need to consider.
  1. Capturing the right information inside the LogEvent – You have to make sure all the information needed to monitor your logs is aggregated into the LogEvent. In a cloud deployment, the basic log details (logger, date, log level) are not enough to pinpoint a critical issue; you further need tenant information (user/domain), host information (to identify which node is sending what), the name of the server (which server the log comes from), etc. This information is critical for analyzing and monitoring logs efficiently.
  2. Send logs to a centralized system in a nonblocking asynchronous manner so that monitoring will not affect the performance of the applications.
  3. High availability and Scalability
  4. Security – Stratos can be deployed and hosted in public clouds; therefore, it is important to make sure the logging system is highly secure.
  5. How to display system/application logs in an efficient way with filtering options along with log rotation.

Those are the five main aspects considered when designing the distributed logging architecture. Since Stratos supports multitenancy, we made sure that logs can be separated by tenant, service, and application.

MT-Logging with WSO2 BAM 2.0

WSO2 BAM 2.0 provides a rich set of tools for aggregating, analyzing, and presenting large-scale data sets, and any monitoring scenario can be easily modeled on the BAM architecture. We selected WSO2 BAM as the backbone of our logging architecture mainly because it provides high performance and non-intrusiveness along with high scalability and security. Since those are the crucial factors for a distributed logging system, WSO2 BAM became the ideal candidate for the MT-Logging architecture.

Publishing Logs to BAM  


We implemented a Log4j appender to send LogEvents to BAM, using BAM data agents to get the log data across to BAM. BAM data agents send data using the Thrift protocol, which gives us high message throughput and is non-blocking and asynchronous. When publishing log events to BAM, we make sure a data stream is created per tenant, per server, per date. When the data stream is initialized, a unique column family is created per tenant, per server, per date, and the logs are stored in that column family in a predefined keyspace in the Cassandra cluster.
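The per-tenant, per-server, per-date stream naming can be sketched in plain Java. This is an illustrative helper, not the actual Carbon publisher code; the exact delimiters and date pattern are assumptions modeled on the data model name 'log.tenantId.applicationName.date':

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class LogStreamName {
    // Builds the per-tenant, per-server, per-date stream name used when
    // publishing log events to BAM. The format here is an illustrative
    // assumption following the data model name 'log.tenantId.applicationName.date'.
    static String streamName(int tenantId, String serverName, Date date) {
        String day = new SimpleDateFormat("yyyy-MM-dd").format(date);
        return "log." + tenantId + "." + serverName + "." + day;
    }

    public static void main(String[] args) {
        // A new stream (and hence Cassandra column family) per tenant,
        // per server, per date:
        System.out.println(streamName(1, "AS", new Date()));
    }
}
```

Because the date is part of the stream name, a fresh column family is started each day, which is what makes daily rotation and archiving straightforward.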



The Data stream defines the set of information which needs to be stored for a particular LogEvent and can be modeled into a Data Model.

Data Model which is used for Log Event.


{'name':'log.tenantId.applicationName.date','version':'1.0.0', 'nickName':'Logs', 'description':'Logging Event',
'metaData':[{'name':'clientType','type':'STRING'} ], 
'payloadData':[
   {'name':'tenantID','type':'STRING'},
   {'name':'serverName','type':'STRING'},
   {'name':'appName','type':'STRING'},
   {'name':'logTime','type':'LONG'},
   {'name':'priority','type':'STRING'},
   {'name':'message','type':'STRING'},
   {'name':'logger','type':'STRING'},
   {'name':'ip','type':'STRING'},
   {'name':'instance','type':'STRING'},
   {'name':'stacktrace','type':'STRING'}
 ] }


We extend org.apache.log4j.PatternLayout in order to capture tenant information, server information, and node information, and wrap it with the log4j LogEvent.
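The idea behind that tenant-aware layout can be sketched without the log4j dependency; the class name, field names, and bracketed output format below are illustrative assumptions, not the actual TenantAwarePatternLayout implementation:

```java
public class TenantAwareFormatter {
    private final int tenantId;
    private final String serverName;
    private final String nodeIp;

    public TenantAwareFormatter(int tenantId, String serverName, String nodeIp) {
        this.tenantId = tenantId;
        this.serverName = serverName;
        this.nodeIp = nodeIp;
    }

    // Plays the role of PatternLayout.format(): wraps a plain log line with
    // the tenant, server, and node context that a stock log4j layout lacks.
    public String format(String level, String logger, String message) {
        return "TID[" + tenantId + "] Server[" + serverName + "] IP[" + nodeIp
                + "] " + level + " " + logger + " - " + message;
    }

    public static void main(String[] args) {
        TenantAwareFormatter f = new TenantAwareFormatter(1, "AS", "10.0.0.5");
        System.out.println(f.format("INFO", "org.example.App", "server started"));
    }
}
```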

Log Rotation and Archiving


Once we send the log events to BAM, the logs are saved in a Cassandra cluster. WSO2 BAM provides a rich set of tools for creating analytics and scheduled tasks, so we use these Hadoop tasks to rotate logs daily, archive them, and store them in a secure environment. To do that we use a Hive query which runs daily as a cron job: it reads the Cassandra data store, retrieves all the column families per tenant per application, and archives them in gzip format.



The Hive query used to rotate logs daily:

set logs_column_family = %s;
set file_path= %s;
drop table LogStats;
set mapred.output.compress=true;
set hive.exec.compress.output=true;
set mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
set io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec;

CREATE EXTERNAL TABLE IF NOT EXISTS LogStats (key STRING,
payload_tenantID STRING,payload_serverName STRING,
payload_appName STRING,payload_message STRING,
payload_stacktrace STRING,
payload_logger STRING,
payload_priority STRING,payload_logTime BIGINT) 
STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler' 
WITH SERDEPROPERTIES ( "cassandra.host" = %s,
"cassandra.port" = %s,"cassandra.ks.name" = %s,
"cassandra.ks.username" = %s,"cassandra.ks.password" = %s,
"cassandra.cf.name" = ${hiveconf:logs_column_family},
"cassandra.columns.mapping" = 
":key,payload_tenantID,
payload_serverName,payload_appName,payload_message,
payload_stacktrace,payload_logger,payload_priority,
payload_logTime" );
INSERT OVERWRITE DIRECTORY 'file:///${hiveconf:file_path}'
select 
concat('TID[',payload_tenantID, ']\t',
'Server[',payload_serverName,']\t',
'Application[',payload_appName,']\t',
'Message[',payload_message,']\t',
'Stacktrace ',payload_stacktrace,'\t',
'Logger{',payload_logger,'}\t',
'Priority[',payload_priority,']\t'),
concat('LogTime[',
(from_unixtime(cast(payload_logTime/1000 as BIGINT),'yyyy-MM-dd HH:mm:ss.SSS' )),']\n') as LogTime from LogStats
ORDER BY LogTime;

Once the logs are archived, we send them to the HDFS file system. The archived logs can be further analysed using MapReduce jobs for long-term data analytics.
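As a sketch of that long-term analysis, a Hive external table can be declared directly over the archived gzip directory in HDFS. The location and the single-column layout (one archived log line per row) are illustrative assumptions; they should be adjusted to the actual archive path and output format produced by the rotation query above:

```sql
-- Illustrative only: read daily gzip archives from HDFS for long-term analysis.
-- The location and single-column layout are assumptions.
CREATE EXTERNAL TABLE IF NOT EXISTS ArchivedLogs (logLine STRING)
LOCATION 'hdfs://localhost:9000/stratos/archivedLogs/';

-- Example long-term query: count archived ERROR lines across all days.
SELECT count(*) FROM ArchivedLogs WHERE logLine LIKE '%Priority[ERROR]%';
```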



Advantages of sending Logs to WSO2 BAM

  1. Asynchronous and non-blocking data publishing
  2. Receives and stores log events in a Cassandra cluster, a highly scalable big-data repository
  3. Rich tool set for analytics
  4. Can be shared with CEP for real-time log event analysis
  5. Can provide logging toolboxes and dashboards for system administrators using WSO2 BAM
  6. High performance and non-intrusiveness
  7. Big-data analysis
    1. Daily log analytics - analyse the Cassandra data store
    2. Long-term log analytics - analyse the HDFS file system using MapReduce

Monitoring and Analyzing System Logs 

  • Using the Log Viewer
    Both application and system logs can be displayed using the management console of a given product. Simply log in to the management console; under Monitor there are two links: 1) System Logs, which shows the system logs of the running server, and 2) Application Logs, which shows application-level logs (services/web applications) for a selected application. This makes it easy for users to filter logs by the application they develop and monitor logs down to the application level.
  • Dashboards and Reports
    System administrators can log in to WSO2 BAM and create their own dashboards and reports, so they can monitor their logs according to their key performance indicators – for example, the number of fatal errors that occur per month for a given node.
  • SMS Alerts and Emails
    Not just dashboards and reports: combining WSO2 BAM with WSO2 CEP, you can get real-time alerts such as triggered emails and SMS messages, so system administrators instantly know when the system exhibits unexpected behavior.
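The fatal-errors-per-month KPI mentioned above could, for example, be a scheduled Hive query. This is an illustrative sketch: it assumes a LogStats-style table as defined in the archiving query, extended with a payload_ip mapping for the data model's ip column:

```sql
-- Illustrative KPI query: FATAL log events per node per month.
-- Assumes LogStats also maps the data model's 'ip' column as payload_ip.
SELECT payload_ip AS node,
       from_unixtime(cast(payload_logTime/1000 as BIGINT), 'yyyy-MM') AS month,
       count(*) AS fatalCount
FROM LogStats
WHERE payload_priority = 'FATAL'
GROUP BY payload_ip,
         from_unixtime(cast(payload_logTime/1000 as BIGINT), 'yyyy-MM');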
View Logs Using the Log Viewer - Current Log

View Logs Using the Log Viewer - Archived Logs


All these rich monitoring capabilities can be built into your deployment using the Stratos distributed logging system, so you don't have to keep going to the system administrators for logs whenever something goes wrong in your application :).















Tuesday, September 11, 2012

Sneak Peek at WSO2 BAM 2.0 & How to Install BAM Data Agents in WSO2 Products

In a fast-growing company, enterprise data plays a major role when it comes to decision making and other top-level business activities. When I say enterprise data, it can be anything which is an asset to your company, for example:
  1. It can be the data coming into your system (employee information, product information, sales data, etc.)
  2. It can be mediation data (who is accessing your services/applications, when, and how)
  3. Request, response, and fault counts for your services.
This raw data can be crucial, and it is very important to analyze and convert it into information in order to provide business intelligence for decision makers.
WSO2 BAM is mostly used in SOA environments because, in an SOA environment, monitoring your data means monitoring your services. Most of your business functionality is exposed as services: for example, to insert a set of records into your database you expose a set of data services, and to mediate services you create a proxy service. You can monitor this kind of data using the WSO2 Business Activity Monitor server.
If you look at the BAM high-level architecture, BAM can be modelled as three major components:
  1. Aggregation – BAM data agents which publish data into the BAM data storage
  2. Analytics – analyzers which analyze the data in BAM storage
  3. Presentation – gadgets which display key performance indicators from the analyzed information


Any business monitoring scenario can be modeled according to these three components.

Aggregation – basically, this means getting your data into BAM (capturing/collecting data into BAM). BAM data is sent using events: you capture an important set of data and make it an event stream. I will give more details on event streams in my next posts :) but for the moment all you need to know is to capture as much data as possible when sending data to BAM – more data means more business intelligence.

Analytics – after you capture your data, you need to make it meaningful (basically converting data into information) by analyzing the stored data. In order to do that you perform data operations (aggregation, merging, sorting, ordering) and ultimately build key performance indicators needed for business intelligence. In BAM you can write your own (custom) analyzers using Hive/Hadoop, and these queries can be saved and scheduled. We will dig deeper into how to write Hive queries with BAM and how to analyze data and create KPIs in the future.

Presentation – once you analyze your data, you may need to visualize your KPIs (using bar charts and tables) and build your own dashboards to present the information to decision makers. Beyond visualization, you might need to send the analyzed data to the relevant parties, for example as daily reports or alerts to top management. WSO2 BAM provides a rich set of tools for these tasks without much coding effort: the gadget editor tool gives you a drag-and-drop way to create gadgets from your analyzed data, or you can connect to reporting, or do some complex event processing and send emails, etc., just with a set of configurations.

Reference: http://mackiemathew.files.wordpress.com/2012/01/bamarchitecture.png

Now that we have discussed the main architecture behind BAM and a bit about its functionality (you can go through and install the BAM samples to get more in-depth knowledge of BAM), let's see how we can use BAM to monitor your services.

For this I am going to use WSO2 ESB and we will see how we can monitor service statistics using BAM.

First we need to install BAM data agents inside ESB.

How to install BAM data agents in ESB?


Download the P2 profile which contains the BAM data agent features, and download the latest WSO2 ESB pack. Go to ESB_HOME/bin and start up the server.
Log in to the Management Console of WSO2 ESB using your user credentials (default is admin/admin).

Go to Configure -> Features and Click on Add Repository.

Click Add Repository. Give a meaningful name for the repository (I am going to use My_REPO), give the path to your P2 profile, and click Add.

Once you add the repository, you will be redirected to a page listing the available features. To install the BAM data agent features, type BAM in "Filter by feature name", untick "Group features by category", and click Find.

Tick the following features: BAM Mediation Data Agent Aggregate, BAM Mediator Aggregated, and BAM Service Data Agent Aggregate, and click Install.


Read and accept the license agreement and click Next. Once the installation is finished, restart the server.

Once you have installed the BAM data agents, you will see the newly installed features under Configure:
1. Service Data Publishing, 2. Mediation Data Publishing, 3. BAM Server Profiling.

How to Configure Service Data Agent ?


You can click on Service Data Publishing to send your service data to WSO2 BAM. In order to do that, you need to download and start up BAM (if you are running ESB on the same machine, change the port offset: in BAM_HOME/repository/conf/carbon.xml, set Offset to 1).

Go to ${WSO2ESB_HOME}/repository/conf/etc and open the bam.xml file. Make sure that you enable ServiceDataPublishing.
The bam.xml file should have the following configuration:




   <BamConfig>
        <ServiceDataPublishing>enable</ServiceDataPublishing>
   </BamConfig>


NOTE: By default this is disabled, and BAM publishing will not occur even if you follow the steps below and make the change in the UI. Therefore, enable ServiceDataPublishing to use it with WSO2 BAM.
Then, on the ESB side, go to Service Data Publishing, give the BAM server URL (the Thrift port started on the BAM server, tcp://localhost:7611), give the BAM credentials (default is admin/admin), and click Update.

Now if you invoke some services on the ESB side, the service data will get published to BAM. To confirm that your data is in BAM, go to the BAM Management Console and check it by connecting with the Cassandra Explorer.

In this post I have only given an intro to BAM 2.0 and shown how to send service data to BAM. In my next post I will explain how to send your enterprise data to BAM and how you can do analytics and presentation using the WSO2 BAM toolkits.

Friday, August 24, 2012

How to send mails using WSO2 ESB (sending your payload data to e-mail)

Suppose you have a web service which returns some data and you want to send that data in an email. WSO2 ESB provides an easy mechanism to do this by creating a proxy service.

First, get your endpoint (the web service endpoint which returns the data).

Step 1 - Enable mail transport in axis2.xml.

Go to your axis2.xml (under WSO2ESB_HOME/repository/conf/axis2/), and uncomment mailto transportSender as shown below.

 <transportSender name="mailto" class="org.apache.axis2.transport.mail.MailTransportSender">
        <parameter name="mail.smtp.host">smtp.gmail.com</parameter>
        <parameter name="mail.smtp.port">587</parameter>
        <parameter name="mail.smtp.starttls.enable">true</parameter>
        <parameter name="mail.smtp.auth">true</parameter>
        <parameter name="mail.smtp.user">synapse.demo.0</parameter>
        <parameter name="mail.smtp.password">mailpassword</parameter>
        <parameter name="mail.smtp.from">synapse.demo.0@gmail.com</parameter>
  </transportSender>

Step 2 - Creating the Proxy Service.

Go to WSO2 ESB and start up the server. Under Main menu -> Axis2 Services -> Add -> Proxy Service, click on Custom Proxy Service. Give an appropriate proxy service name (and give the WSDL URL as per your requirement).

Step 3 - Define Endpoint

In the define endpoint section, click Define Inline, then Create -> Address Endpoint, and give the address endpoint of the service you get data from. I am going to invoke the data service I created in my previous blog, and my address endpoint will be



Step 4 - Creating the Out Sequence. Go to Out Sequence -> Define Inline

In the out sequence we need to add:
A log mediator - which logs the incoming message
Three property mediators:
    1. Subject - the subject of our mail
    2. MessageType - the message type of our mail
    3. ContentType - the content type of our mail
And a send mediator - which sends the mail to our email address

To add the log mediator, click Add Child -> Core -> Log, and in the log mediator set the log level to full.

To add the three properties, click Add Child -> Core -> Property, and add the three properties one by one:
<property name="Subject" value="CEP Event" scope="transport" type="STRING"/>
<property name="MessageType" value="text/html" scope="axis2" type="STRING"/>
<property name="ContentType" value="text/html" scope="axis2" type="STRING"/>

To add the send mediator, click Add Child -> Core -> Send, then select Endpoint Type -> Define Inline -> Address Endpoint -> mailto:amani.soysa@gmail.com.

Once you add the three properties and the log mediator, your out sequence editor should look like below.



Click save and close and finish creating your proxy Service.


<proxy xmlns="http://ws.apache.org/ns/synapse" name="MyMailProzy" transports="https,http" statistics="disable" trace="disable" startOnLoad="true">
   <target>
      <inSequence />
      <outSequence>
         <log level="full" />
         <property name="Subject" value="CEP Event" scope="transport" />
         <property name="MessageType" value="text/html" scope="axis2" type="STRING" />
         <property name="ContentType" value="text/html" scope="axis2" />
         <property name="OUT_ONLY" value="true" scope="default" type="STRING" />
         <send>
            <endpoint>
               <address uri="mailto:amani.soysa@gmail.com" />
            </endpoint>
         </send>
      </outSequence>
      <endpoint>
         <address uri="http://localhost:9765/services/PersonsDataService" />
      </endpoint>
   </target>
   <publishWSDL uri="http://localhost:9765/services/PersonsDataService?wsdl" />
   <description></description>
</proxy>



You can invoke your proxy service by going to Try It in the service list, and you should get a mail containing your payload. My next blog post will show how to receive emails in WSO2 ESB using a standard email client.






Thursday, August 23, 2012

A Song best suited for WSO2 ESB!! :)


Who - Magic Bus 


Songwriters: PETER TOWNSHEND

Every day I get in the queue (Too much, the Magic Bus)
To get on the bus that takes me to you (Too much, the Magic Bus)
I'm so nervous, I just sit and smile (Too much, the Magic Bus)
Your house is only another mile (Too much, the Magic Bus)
Thank you, driver, for getting me here (Too much, the Magic Bus)
You'll be an inspector, have no fear (Too much, the Magic Bus)
I don't want to cause no fuss (Too much, the Magic Bus)
But can I buy your Magic Bus? (Too much, the Magic Bus)
Nooooooooo!

I don't care how much I pay (Too much, the Magic Bus)
I wanna drive my bus to my baby each day (Too much, the Magic Bus)
*[Magic Bus, Magic Bus, Magic Bus
Magic Bus, Magic Bus, Magic Bus
Give me a hundred (Magic Bus)
I won't take under (Magic Bus)
Goes like thunder (Magic Bus)
It's a bus-age wonder (Magic Bus)

Magic Bus, Magic Bus, Magic Bus, Magic Bus
I want it, I want it, I want it...(You can't have it!)
Think how much you'll save...(You can't have it!)]
I want it, I want it, I want it, I want it ... (You can't have it!)

Thruppence and sixpence every day
Just to drive to my baby
Thruppence and sixpence each day
'Cause I drive my baby every way

Magic Bus, Magic Bus, Magic Bus, Magic Bus, Magic Bus...
I want the Magic Bus, I want the Magic Bus, I want the Magic Bus...

I said, now I've got my Magic Bus (Too much, the Magic Bus)
I said, now I've got my Magic Bus (Too much, the Magic Bus)
I drive my baby every way (Too much, the Magic Bus)
Each time I go a different way (Too much, the Magic Bus)
I want it, i want it, I want it, I want it ...

Every day you'll see the dust (Too much, the Magic Bus)
As I drive my baby in my Magic Bus (Too much, the Magic Bus)

Wednesday, August 22, 2012

How to use WSO2 Payload mediator - Calling data service insertion using payload mediator

In my previous blog I showed how to use an iterate mediator to iterate through a SOAP message. In this post I am going to explain how the WSO2 ESB payload mediator works.

Let's say you have a service which provides a set of data and you want to call a data service insert operation. The message below is generated from a data service which accesses a database table and gets a set of records. Please refer to "How to create a MYSQL data service using WSO2 data services Server" if you want to create the data service and generate the request below.

<Keys xmlns="http://ws.wso2.org/dataservice">
<Key>
    <P_Id>1</P_Id>
    <LastName>Soysa</LastName>
    <FirstName>Amani</FirstName>
    <Address>361 Kotte Road Nugegoda</Address>
    <City>Colombo</City>
 </Key>
 <Key>
    <P_Id>2</P_Id>
    <LastName>Bishop</LastName>
    <FirstName>Peter</FirstName>
    <Address>300 Technology BuildingHouston</Address>
    <City>London</City>
 </Key>
 <Key>
    <P_Id>3</P_Id>
    <LastName>Clark</LastName>
    <FirstName>James</FirstName>
    <Address>Southampton</Address>
    <City>London</City>
 </Key>
 <Key>
    <P_Id>4</P_Id>
    <LastName>Carol</LastName>
    <FirstName>Dilan</FirstName>
    <Address>A221 LSRC Box 90328 </Address>
    <City>Durham</City>
 </Key>
</Keys>

Let's see how we can use this data set, which arrives at WSO2 ESB as a SOAP request: we need to extract the SOAP payload data and send it to a data service. First we use the iterate mediator, which iterates over the SOAP request if there is more than one data set. Then we create the data service SOAP request using the payload mediator.

   <payloadFactory>
  <format>
     <p:InsertPerson xmlns:p="http://ws.wso2.org/dataservice">
        <p:P_Id>?</p:P_Id>
        <p:LastName>?</p:LastName>
        <p:FirstName>?</p:FirstName>
        <p:Address>?</p:Address>
        <p:City>?</p:City>
     </p:InsertPerson>
  </format>
  <args>
     <arg expression="//P_Id/text()" />
     <arg expression="//LastName/text()" />
     <arg expression="//FirstName/text()" />
     <arg expression="//Address/text()" />
     <arg expression="//City/text()" />
  </args>
</payloadFactory>

Once we create the payload mediator, we can add a send mediator to insert the data into the data service:

  <send>
     <endpoint>
        <address uri="http://localhost:9765/services/MyFirstDSS/" />
     </endpoint>
  </send>

When you put everything together, your proxy service will look as shown below.

<proxy xmlns="http://ws.apache.org/ns/synapse" name="AssetProxyService" transports="https,http" statistics="disable" trace="disable" startOnLoad="true">
  <target>
     <inSequence>
         <iterate xmlns:m="http://ws.wso2.org/dataservice" id="iter1" expression="//m:Keys/m:Key">
           <target>
              <sequence>
                 <payloadFactory>
                    <format>
                       <p:InsertPerson xmlns:p="http://ws.wso2.org/dataservice">
                          <p:P_Id>?</p:P_Id>
                          <p:LastName>?</p:LastName>
                          <p:FirstName>?</p:FirstName>
                          <p:Address>?</p:Address>
                          <p:City>?</p:City>
                       </p:InsertPerson>
                    </format>
                    <args>
                       <arg expression="//P_Id/text()" />
                       <arg expression="//LastName/text()" />
                       <arg expression="//FirstName/text()" />
                       <arg expression="//Address/text()" />
                       <arg expression="//City/text()" />
                    </args>
                 </payloadFactory>
                 <send>
                    <endpoint>
                       <address uri="http://localhost:9765/services/MyFirstDSS/" />
                    </endpoint>
                 </send>
              </sequence>
           </target>
        </iterate>
     </inSequence>
  </target>
  <description />
</proxy>
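For reference, the inSequence above expects an incoming message whose body carries a Keys list, since the iterate expression is //m:Keys/m:Key. A minimal request body matching that shape might look like the following (the field values are purely illustrative):

```xml
<Keys xmlns="http://ws.wso2.org/dataservice">
   <Key>
      <P_Id>5</P_Id>
      <LastName>Perera</LastName>
      <FirstName>Nimal</FirstName>
      <Address>12 Galle Road</Address>
      <City>Colombo</City>
   </Key>
</Keys>
```

The iterate mediator clones the message once per Key element, and the payloadFactory maps each record's fields into an InsertPerson request for the data service.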

Wednesday, August 15, 2012

How to Use Iterator Mediator to Iterate through SOAP message Using WSO2 ESB

Let's say you have a SOAP response coming from your service, and you need to go through that message, extract or transform the data, and send it to another service. WSO2 ESB provides a solution to do this in a few easy steps.
Let's look at the following SOAP message. It is generated by a data service that accesses a database table and returns a set of records. Please refer to "How to create a MySQL data service using WSO2 Data Services Server" if you want to create the data service and generate the response below.

SOAP response:

<Keys xmlns="http://ws.wso2.org/dataservice">
  <Key>
     <P_Id>1</P_Id>
     <LastName>Soysa</LastName>
     <FirstName>Amani</FirstName>
     <Address>361 Kotte Road Nugegoda</Address>
     <City>Colombo</City>
  </Key>
  <Key>
     <P_Id>2</P_Id>
     <LastName>Bishop</LastName>
     <FirstName>Peter</FirstName>
     <Address>300 Technology BuildingHouston</Address>
     <City>London</City>
  </Key>
  <Key>
     <P_Id>3</P_Id>
     <LastName>Clark</LastName>
     <FirstName>James</FirstName>
     <Address>Southampton</Address>
     <City>London</City>
  </Key>
  <Key>
     <P_Id>4</P_Id>
     <LastName>Carol</LastName>
     <FirstName>Dilan</FirstName>
     <Address>A221 LSRC Box 90328 </Address>
     <City>Durham</City>
  </Key>
</Keys>

Let's see how we can create a proxy service to retrieve this data; we will log the retrieved data using a log mediator. Before we start, you need to have a WSO2 ESB server, start it up, and log in to the ESB Management Console.

Step 1 -  Creating the Custom Proxy - Basics Settings

First, give the proxy service a name and the targeted WSDL file as shown below. (For the WSDL file, you can use the WSDL of the data service, which can be accessed through the data service's Service Dashboard.)



Step 2 - Defining the Endpoint

Once you are done with the basic settings, click Next to define the endpoint. Choose to define the endpoint inline and give the address endpoint of the created data service.

Step 3 - Defining the Out Sequence.

You need to put the iterate mediator inside the out sequence in order to iterate through the incoming SOAP message. Go to Define Out Sequence -> Define Inline -> Add, then click Add Child -> Advanced -> Iterate.


When you scroll down you will get a wizard where you need to enter the following information:

Iterate ID - iter1
Iterate Expression - //m:Keys/m:Key (don't forget to bind the namespace used in the iterate expression)
Namespace Prefix - m
Namespace URL - http://ws.wso2.org/dataservice
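To see exactly which elements the iterate expression //m:Keys/m:Key selects, here is a small standalone sketch (run outside the ESB, purely illustrative) that applies the same namespace-qualified path to a cut-down version of the sample payload using Python's standard library:

```python
import xml.etree.ElementTree as ET

# Trimmed version of the data service response shown above.
soap_body = """<Keys xmlns="http://ws.wso2.org/dataservice">
  <Key><P_Id>1</P_Id><LastName>Soysa</LastName></Key>
  <Key><P_Id>2</P_Id><LastName>Bishop</LastName></Key>
</Keys>"""

# The prefix "m" is bound to the data service namespace, exactly as in
# the iterate mediator's namespace binding.
ns = {"m": "http://ws.wso2.org/dataservice"}

root = ET.fromstring(soap_body)  # root is the <Keys> element
# Each match below corresponds to one cloned message produced by iterate.
keys = root.findall("m:Key", ns)

print(len(keys))  # 2
for key in keys:
    print(key.find("m:P_Id", ns).text)
```

Each element matched by the expression becomes a separate message inside the iterate mediator's target sequence, which is why the log mediator added in the next step fires once per record.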

  
Let's add some log mediators to print the data (to add a log mediator, go to Add Child under the target, then click Core -> Log).

Once you have added the log mediator and a send mediator inside the target, your design view editor should look like below.



Click Save and Close to go back to the proxy service editor, then click Finish to complete creating the proxy service.

Once you create the proxy service, your synapse configuration should look like below.


<proxy xmlns="http://ws.apache.org/ns/synapse" name="IteratorProxyService" transports="https,http" statistics="disable" trace="disable" startOnLoad="true">

  <target>
     <outSequence>
        <iterate xmlns:m="http://ws.wso2.org/dataservice" id="iter1" expression="//m:Keys/m:Key">
           <target>
              <sequence>
                 <log level="custom">
                    <property name="P_Id" expression="//m:P_Id/text()" />
                    <property name="LastName" expression="//m:LastName/text()" />
                    <property name="FirstName" expression="//m:FirstName/text()" />
                    <property name="Address" expression="//m:Address/text()" />
                    <property name="City" expression="//m:City/text()" />
                 </log>
                 <send />
              </sequence>
           </target>
        </iterate>
     </outSequence>
     <endpoint>
        <address uri="http://localhost:9765/services/MyFirstDSS/" />
     </endpoint>
  </target>
  <description></description>
</proxy>

If you go to the Try It feature inside the service dashboard of your proxy service, you can invoke the GetPeople operation to test it.
Once you invoke it, the following log entries will be printed to your console window:

[2012-08-15 14:37:59,078]  INFO - LogMediator P_Id = 1, LastName = Soysa, FirstName = Amani, Address = 361 Kotte Road Nugegoda, City = Colombo
[2012-08-15 14:37:59,084]  INFO - LogMediator P_Id = 2, LastName = Bishop, FirstName = Peter, Address = 300 Technology BuildingHouston, City = London
[2012-08-15 14:37:59,089]  INFO - LogMediator P_Id = 3, LastName = Clark, FirstName = James, Address = Southampton, City = London
[2012-08-15 14:37:59,092]  INFO - LogMediator P_Id = 4, LastName = Carol, FirstName = Dilan, Address = A221 LSRC Box 90328 , City = Durham
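If you prefer to invoke the proxy with a SOAP client instead of the Try It page, the request is an ordinary SOAP call. As a sketch, assuming the GetPeople operation takes no arguments and lives in the data service namespace (an assumption based on the response shown above), the request envelope would look roughly like this:

```xml
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:dat="http://ws.wso2.org/dataservice">
   <soapenv:Body>
      <!-- Operation element name and namespace are assumptions;
           check the data service WSDL for the exact values. -->
      <dat:GetPeople/>
   </soapenv:Body>
</soapenv:Envelope>
```

Post this to the proxy service endpoint shown on its service dashboard, and the out sequence will iterate over the response and log each record as above.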