Friday, September 11, 2020

SOA Scenario

Q) You have three physically different databases, one for each geography (APAC, US, Europe). You are receiving customer data and, based on the location, you want to insert it into the appropriate database. How can this be achieved? Discuss possible approaches along with their benefits and drawbacks.

We can use the following approaches:

Approach 1
Use a Mediator and do content-based routing.
Each DB has its own JNDI data source configured.

Approach 1 Pros

1. Easy to change rules
2. Easy to add new targets
3. User IDs and passwords don't get exposed, as we can use JNDI data sources

Approach 1 Cons

1. If we need to add a new target, it cannot be done "dynamically". We need to manually add a new routing rule.

Approach 2

Use a BPEL process, and use a dynamic partner link.
The DB parameters are passed as input (or read from some source like a DVM).

Approach 2 Pros

1. A new target database can be referred to very easily, as we just need to change the inputs

Approach 2 Cons

1. The user ID and password are exposed unless you write Java code that retrieves them from JNDI
2. If you use the Java code, then you need to use

Q) I want to design a system that would process orders during the night and despatch them to the suppliers. However, I want the system to be capable of accepting orders all throughout the day. What kind of architecture do you propose

We need to design a system with asynchrony built into it.

We can go for storing the messages in queues.

However, EDN (Event Delivery Network) would be an even better alternative.

Q) I have 5 departments that want to exchange Customer data. Each team is interested in some common attributes and some attributes that are specific to their application. The provider application serves only the relevant fields to each department. What kind of problems do you foresee and what can be a solution?

As each system requires a different set of data, there would be a problem of developing a large number of transformations if we start connecting each system to every other system: roughly n(n-1)/2 point-to-point mappings for n systems.

Using a canonical data model to convert an application-specific message to a common message can solve the issue.

A Canonical Data Model is independent of any specific application; each application is required to produce and consume messages in this common format.

Canonical Data Model defines a common architecture for messages exchanged between applications or components.
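For illustration, a minimal sketch of what a canonical Customer message could look like (the element names are made up, not from the scenario above); each department then needs only one transformation to and from this format, about 2n mappings instead of n(n-1)/2:

<cust:Customer xmlns:cust="http://example.com/canonical/customer">
  <cust:CustomerId>12345</cust:CustomerId>
  <cust:Name>Jane Doe</cust:Name>
  <cust:Region>APAC</cust:Region>
  <!-- department-specific attributes go into an optional extension section -->
  <cust:Extensions/>
</cust:Customer>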


Q.) What happens if A inserts data in the DB, then calls B synchronously, B inserts a row in the DB and then throws a remote exception?
By default A and B do not share a transaction; B creates its own transaction.
A handles the error. Note that B was in a different transaction than A, so B rolls back because it has an unhandled exception. The insert of A stays; the insert of B is rolled back.

Q.) If A calls B, both BPEL composites, do they share a transaction? How can you make B share a transaction?

If A calls B, both BPEL composites, do they share a transaction?
The default behavior is that they don't; B creates its own transaction.

How can you make B share a transaction?

It can be changed by setting bpel.config.transaction=required on B's BPEL component in its composite.xml.
The default is transaction=requiredNew.
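A minimal composite.xml sketch for B (the component name is made up), using the bpel.config.transaction property shown later in these notes:

<component name="BProcess">
  <!-- join the caller's transaction instead of creating a new one -->
  <property name="bpel.config.transaction">required</property>
</component>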

Q.) When you work with inbound JCA adapters (for instance JMS, AQ, MQ) in SOA Suite, you need to be able to control the TPS (transactions per second). This allows you to avoid stuck threads in case you receive an unexpected batch of messages, as part of domain load tuning.
1. Add the minimumDelayBetweenMessages property
This is the parameter that does the magic. It adds a thread sleep as part of the instance execution, that is, on a per polling thread basis. The setting is measured in milliseconds.

2. How do you do that?
The property is added to the SCA file (composite.xml) in your project, as a <service> property related to the adapter partner link that connects to your JCA resource.
Note: it is not a binding property.


When you deploy your code in the SOA server, you will be able to check the value in the JCA adapter using EM console. You can also change it on the fly, in case you need to tune the value and test it.
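A rough sketch of how the property could look in composite.xml, assuming an inbound JMS adapter service (the service name and .jca file are placeholders):

<service name="ReceiveOrderJMS">
  <interface.wsdl interface="..."/>
  <binding.jca config="ReceiveOrderJMS_jms.jca"/>
  <!-- per polling thread delay, in milliseconds -->
  <property name="minimumDelayBetweenMessages">500</property>
</service>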

Consider a scenario where you need to control throttling in a domain that consists of:
- 4 managed server nodes
- 2 threads per adapter (set in the InboundThreadCount parameter)
- 500 milliseconds (0.5 seconds) set in minimumDelayBetweenMessages

To calculate the TPS, use this formula:
(MS nodes * thread count) / (minimumDelay/1000)
In this example, you will have (4 * 2) = 8 threads connected to the queue. Each thread will delay 500/1000 = 0.5 seconds before pushing a new message, so you will have a maximum of 16 TPS.

This means that when you set the minimum delay, you also set the maximum TPS. The actual rate can be lower if your per-thread transaction execution time is higher than the minimum delay.

Q) Transaction Timeout Values in Durable Synchronous Processes

You can specify transaction timeout values with the SyncMaxWaitTime property in the System MBean Browser of Oracle Enterprise Manager Fusion Middleware Control
(i.e. EM -> soa-infra -> SOA Infrastructure -> SOA Administration -> BPEL Properties -> SyncMaxWaitTime: Value -> Apply -> Return).

The SyncMaxWaitTime property applies to durable synchronous processes that are called in an asynchronous manner.

-> When the client (or another BPEL process) calls the process, the wait (breakpoint) activity is executed. However, since the wait is processed after some time by an asynchronous thread in the background, the executing thread returns to the client side.

-> The client (actually the delivery service) tries to pick up the reply message, but it is not there, since the reply activity in the process has not yet executed. Therefore, the client thread waits for up to the SyncMaxWaitTime value in seconds. If this time is exceeded, the client thread returns to the caller with a timeout exception.

-> If the reply arrives within the SyncMaxWaitTime value, the asynchronous background thread resumes at the wait and executes the reply. The reply is placed in the HashMap and the client thread is notified. The client thread picks up the reply message and returns.

Therefore, SyncMaxWaitTime only applies to synchronous process invocations when the process has a breakpoint in the middle. If there is no breakpoint, the entire process is executed by the client thread, which returns the reply message.

If the BPEL process service component does not receive a reply within the specified time, the activity fails.

Note: While it is not recommended to have asynchronous (breakpoint) activities inside a synchronous process, BPEL does not prevent this type of design.
 
Q) what is direct binding?
SOA-DIRECT provides native connectivity between Oracle Service Bus and Oracle SOA Suite service components.

"Oracle SOA Suite provides a ""direct binding"" framework that lets you expose Oracle SOA Suite service components in a composite application, and the Oracle Service Bus SOA-DIRECT transport interacts with those exposed services through the SOA  direct binding framework,"

Also, it is noteworthy that SOA-DIRECT supports the following features:

1.    Invocation of any SOA binding component services through Java RMI and optimized RMI transport for invoking SOA services.

2. WS-Addressing, including optional auto-generation of ReplyTo properties for asynchronous callbacks.

3. Identity and transaction propagation.
4. Attachments.
5. High availability and clustering support.
6. Failover and load balancing.
7. Connection and application retries on errors.   

Q) what is direct binding?

The direct reference binding component provides support for sending SOA messages directly to external services over RMI.

SOA-DIRECT provides native connectivity between Oracle Service Bus and Oracle SOA Suite service components.

Direct binding enables Java clients to directly invoke composite services, bypassing the intermediate conversion to XML required with web service binding.

Inbound direct binding

The direct service binding component allows an external client to send messages using the Direct Binding Invocation API, where the Direct Binding Invocation API takes the JNDI connection parameters and creates a connection object on behalf of the client.

Outbound direct binding (or direct reference binding)

The direct reference binding component provides support for sending SOA messages directly to external services over RMI. These external services must implement the SOA invocation API, the same as the direct inbound invocation API.


Q.) Your composite will have 2 sections: reference & service binding.

Your reference will point to the abstract WSDL, which is what is used while loading composite references during startup.

The service binding in your SOA composite will have the actual concrete WSDL.

Binding Components

Binding components establish a connection between a SOA composite and the external world.

 There are two types of binding components:

Services

Services provide the outside world with an entry point to the SOA composite application, by using a WSDL to describe the interface.

The binding connectivity of the service describes the protocols that can communicate with the service, for example, SOAP/HTTP or a JCA adapter.

References

References enable messages to be sent from the SOA composite application to external services in the outside world.

- Service (default): creates a web service to provide an entry point to the SOA composite application.
- Reference: creates a reference to provide access to an external service in the outside world.

Q) What is the difference between Service component, Service binding and Reference binding?
Service components are the building blocks that you use to construct a SOA composite application.
Examples: BPEL, Human Task, Business Rules, Mediator, Spring.

Binding components establish a connection between a SOA composite and the external world.

They are categorized as Service binding components and Reference binding components.
-> Service binding components provide the entry point to the composite.
-> Reference binding components provide access to external services in the outside world.
Examples include JCA adapters, HTTP binding, direct binding, etc.

Q) Can a composite have multiple service bindings?
Yes, there can be multiple service bindings for a composite.
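A minimal composite.xml sketch (names are illustrative) of a composite exposing two service bindings, one SOAP/HTTP and one inbound File adapter:

<service name="OrderServiceSOAP">
  <interface.wsdl interface="http://example.com/order#wsdl.interface(OrderPortType)"/>
  <binding.ws port="http://example.com/order#wsdl.endpoint(OrderService/OrderPort)"/>
</service>
<service name="OrderFilePickup">
  <interface.wsdl interface="http://example.com/order#wsdl.interface(ReadPortType)"/>
  <binding.jca config="ReadOrder_file.jca"/>
</service>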

Q) What is the disadvantage of having a concrete WSDL in the MDS instead of an abstract one, so that I don't need to remove the binding and service parts of the WSDL before storing it in the MDS? At design/load time, the composite only needs types, messages and port type information, and that is already available in the concrete WSDL stored in the MDS.

Q) What is the disadvantage of having binding and service information in the MDS WSDL?

-> Think of a situation where the service has been moved/upgraded - typically in such a scenario the service endpoint changes but the contract still remains the same.

-> We don't want to store the concrete WSDL (with the actual service endpoint) in MDS, hence we decouple the service from its contract - an abstract WSDL helps with this design.

-> Your SOA composite will have the reference pointing to the abstract WSDL, and the service binding will reference the concrete WSDL.

Q) What is the difference between a concrete and an abstract WSDL?

Concrete: Besides the information about how to communicate with the web service, it also has the information on where the service exists.

It has Bindings (the protocol over which the message should be sent) and Services (the endpoint for each binding).

-> Used on the client side
-> Has binding and service sections

Abstract: It has information about how to communicate with the web service, like types (schema), messages (the input and output messages the service accepts), operations (the operations that can be performed on this service) and port type.

-> Used on the server side
-> Reusable
-> No binding details (binding & service)
-> Types, messages, port type defined

Abstract Definitions:

1) TYPES: Machine- and language-independent type definitions.

2) MESSAGES: Contain function parameters (inputs are separate from outputs) or document descriptions.

3) PORT TYPES: Refer to the message definitions in the messages section that describe function signatures (operation name, input parameters, output parameters).

Concrete Description:

It contains the Abstract WSDL + Bindings and Services.

Bindings: Specify the binding of each operation in the portType section.

Services: Specify the port address of each binding.
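A skeletal WSDL illustration (names are invented) of which sections form the abstract part and which are added by the concrete part:

<definitions name="CustomerService" targetNamespace="http://example.com/customer">
  <!-- Abstract part -->
  <types> <!-- XML schema for the payloads --> </types>
  <message name="GetCustomerRequest"> ... </message>
  <message name="GetCustomerResponse"> ... </message>
  <portType name="CustomerPortType">
    <operation name="getCustomer">
      <input message="tns:GetCustomerRequest"/>
      <output message="tns:GetCustomerResponse"/>
    </operation>
  </portType>
  <!-- Concrete part -->
  <binding name="CustomerSOAPBinding" type="tns:CustomerPortType"> <!-- SOAP/HTTP details --> </binding>
  <service name="CustomerService">
    <port name="CustomerPort" binding="tns:CustomerSOAPBinding">
      <soap:address location="http://host:port/CustomerService"/>
    </port>
  </service>
</definitions>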

Q). What is a logical delete?
When we poll for new or changed records using the database adapter, we have multiple options after reading the data. Either we can delete the row from the table (physical delete), which is not recommended, or we can update a column in the source table with a new value (logical delete).
E.g. the source table has one column with the name flag; we read the data from the source table when the flag column has the value "N", and after we read the data we update the flag value to "Y".

Logical Delete:
A logical delete thus prevents the DB adapter from picking up the same record again and again: we update a status column to mark records as read rather than deleting them.
<property name="MarkReadColumn" value="MARKCOLUMN"/>
<property name="MarkReadValue" value="READ"/>
<property name="MarkReservedValue" value="R${weblogic.Name-2}-${IP-2}"/>
<property name="MarkUnreadValue" value="UNREAD"/>

Q) What is polling? How many ways can we do polling with the DB Adapter?

A very useful feature of the Oracle database adapter is polling.

It tells us about any changes in the particular table on which we want to poll.

Using this feature we can do a lot of things according to our logic and requirements.

DB Adapter polling tricks

The commonly used polling strategies with the Oracle DB adapter are:

DeletePollingStrategy

This is the simplest strategy: we read all possible rows from the table and delete them afterwards to ensure that they are only read once.

LogicalDeletePollingStrategy

This is a non-intrusive polling mechanism where we update a status column to mark records as read rather than deleting them.

It uses 2 SQL statements:

one for polling/reading the records, and

another after-read SQL to update/mark the records as read.

There may be cases where you would like to control the number of DB records which are polled at a time. 

Polling frequency           

Polling frequency is the interval at which the DB adapter activation agent polls the new records.             

"Database Rows per Transaction"           

Database Rows per Transaction (default value of 10) controls the number of records which are read at a time.

"For eg. if there are 1000 records to be read and we set the Database Rows per Transaction=10 , at the start of the polling  interval the entire work is divided into 1000/10=100 transaction units and completes sequentially till all are processed."

This property resolves to MaxTransactionSize in the jca file.                      

Distributed Polling

If we enable the distributed polling checkbox and set MaxTransactionSize, the behaviour changes.

Here the entire work is still divided into 100 transaction units, but each unit is processed in a separate polling interval, i.e. in the 1st polling interval 10 records are processed, and the remaining 990 are processed in subsequent intervals.

A widely used polling strategy

When you have a clustered environment where multiple nodes are polling for the same data, it is likely that the same record will be processed more than once.

To avoid this problem, the DB Adapter has a Distributed Polling technique that utilizes an Oracle Database feature.

If distributed polling is not set, then the adapter tries to process all unprocessed rows in a single polling interval.
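A rough sketch of how these knobs typically show up in the DB adapter's generated .jca file (the class name and exact property set can vary by version, so treat this as an assumption to verify against your own generated file):

<activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
  <property name="PollingInterval" value="10"/>       <!-- seconds between polls -->
  <property name="MaxTransactionSize" value="10"/>    <!-- rows read per transaction -->
  <property name="MaxRaiseSize" value="1"/>           <!-- rows raised per message -->
  <property name="NumberOfThreads" value="1"/>        <!-- polling threads -->
</activation-spec>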

Q). Can we change the file name and directory path at runtime?
Yes, we can change the file name and directory path at run time. To do that, open the Invoke activity used to invoke the File/FTP adapter, go to the Properties tab and update the following properties:
jca.file.FileName/jca.ftp.FileName
jca.file.Directory/jca.ftp.Directory
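A sketch of what this generates on the invoke activity in a BPEL 1.1-style process (the variable and partner link names are made up, and the exact property element differs between BPEL 1.1 and 2.0 projects):

<invoke name="WriteFile" partnerLink="FileOut" operation="Write" inputVariable="FileOutRequest">
  <bpelx:inputProperty name="jca.file.FileName" variable="varFileName"/>
  <bpelx:inputProperty name="jca.file.Directory" variable="varOutputDir"/>
</invoke>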

Q.) How do we read large files using these adapters?
We can use the streaming option to read large files with these adapters. We can also read the files as attachments.

Q.) Can we read a file without reading its content?

Yes, we can read a file without reading its content by selecting the "Do not read file content" check box.

Q.) What is the difference between the Read and Sync Read operations?
We go for the Read operation when we need to perform polling, i.e. our process starts by polling for a file. The Sync Read operation is used when you need to read a file in the middle of the flow.

Q) What is a SyncRead operation? Is it an inbound or an outbound operation? Can my process begin with a SyncRead operation?
When a file has to be read in the middle of the BPEL process, we use the SyncRead operation, meaning some process step initiates the file read. It is an outbound operation, and a process cannot begin with a SyncRead.

Q.) What is the difference between a Physical and a Logical Directory?
Below are the major differences between physical and logical paths.
Physical Path
1. As the name suggests, we mention the actual full (physical) path of the directory.
2. Not flexible.
3. We need to manually change this when different environments have different paths.
Logical Path
1. Here we mention a logical name, and the actual value of that path is defined in the composite.xml file.
2. Flexible, as we can change it from the EM console.
3. We can easily replace this path with the help of a config plan if we have different paths in different environments.

Q.) File & FTP adapters are known and Transactional or Non-Transactional adapters?
These adapters are known as non-transactional adapters as these adapters does not support transactions.

Q.) File Debatching
When a file contains multiple messages, you can choose to publish the messages in batches of a specific size. This is referred to as debatching.
During debatching, the file reader, on restart, proceeds from where it left off in the previous run, thereby avoiding duplicate messages.
File debatching is supported for files in XML and native formats.

Q) Processing large files through SOA Suite using Synchronous File Read
You don't want to read a huge file into memory and then process it; preferably you process it in smaller chunks. Chunking the file using the "Read File" option of the file adapter is pretty straightforward: all you need to do is specify the publish size. Working with chunks via the "Synchronous Read File" option used from BPEL is less easy.

Q). How to poll a single file from an FTP/File location which has multiple files?

Let's take an example: if we have an include-file wildcard of *.txt and there are 2 files available at the File/FTP location, the Read operation will read both files and create two instances, and the creation time will be the same for both instances.

To avoid such a scenario and read only one file at a time, we need to use the below properties in the .jca file:

<property name="SingleThreadModel" value="true"/>
<property name="MaxRaiseSize" value="1"/>


Q) Read large XML files in chunks
Chunking is different from debatching:
when you debatch a file you actually create multiple instances for the file,
whereas when you chunk-read you read the whole file in chunks within a single instance.

Q) What is the nonBlockingInvoke property?
A non-blocking invoke is used when a parallel flow needs to be executed: a new thread is created for each invoke activity, and they execute simultaneously.

Q) What is the singleton property in SOA?
In a clustered environment, when the processing of messages should happen via only one SOA managed server, the singleton property needs to be defined at the adapter level.

Q) Singleton property in DB Adapter
In clustered environments, when we deploy a composite which uses DB adapter polling, the DB adapter starts polling the staging table (EMP) in parallel on every node at the configured interval. To resolve this issue we need to set the singleton property in the composite.

Q.) In clustered environments, when we deploy a composite which uses DB adapter polling, the DB adapter starts polling the staging table (EMP) in parallel at the configured interval. Both servers in the cluster start initiating DB adapter instances in parallel, and if we increase the servers from 2 to 3, three polling instances start in the same way.
To resolve this issue we need to set the singleton property in composite.xml:
<binding.jca config="binding_db.jca">
<property name="singleton">true</property>
</binding.jca>

Q) The issue is that the DB adapter starts polling the staging table (OEBS) in parallel at the configured interval. Node 1 and node 2 start initiating DB adapter instances in parallel, and if we increase the cluster from 2 to 3 nodes, three polling instances start in the same way. This leads to a situation in which, if we increase the cluster in the future from 2 to 50 nodes, 50 polling instances will be created and will hit the third-party application in parallel. This could be a problem for the target system if it doesn't support parallel processing.
The solution to this problem is the singleton pattern for the polling component / DB adapter.
There is an inbound endpoint lifecycle support property within adapters called singleton.
To enable this feature in a high-availability environment for a given inbound adapter endpoint, add the singleton JCA service binding property within the <binding.jca> element in composite.xml and set it to true.

Q) Distributed Queue: 
Defines a set of queues that are distributed on multiple JMS servers, but which are accessible as a single, logical queue to JMS clients.

Q) Distributed Topic:
Defines a set of topics that are distributed on multiple JMS servers, but which are accessible as a single, logical topic to JMS clients.

Q.) Distributed Topic and SOA Cluster

If you are creating a (regular) topic, it can point to only one of the managed servers;
however, a distributed topic can be pointed to multiple servers.

While creating a distributed topic, one important point is to make sure you specify the forwarding policy as Partitioned, otherwise the message will be replicated to all the servers.

On the developer side, add the singleton property in your SOA process:
<binding.jca>
<property name="singleton">true</property>
</binding.jca>

Q.) Distributed Polling
When you have a clustered environment where multiple nodes are polling for the same data, it is likely that the same record will be processed more than once. To avoid this problem, the DB Adapter has a Distributed Polling technique that utilizes an Oracle Database feature: SELECT FOR UPDATE SKIP LOCKED.

Distributed polling means that when a record is read, it is locked by the reading instance.

Distributed polling is used in clustered environments. It avoids the creation of multiple BPEL instances for the same record, i.e. it avoids the possibility of picking the same record multiple times in a clustered environment.

Often in production environments, servers run in clustered mode, i.e. more than one managed server running under one cluster. If we don't enable distributed polling and we have, say, 5 servers in the cluster, it is quite possible that all 5 nodes try to poll the same record at the same time, which results in 5 concurrent instances with the same data. Clearly, we do not want that.
When we select Distributed Polling while configuring the DB adapter, it automatically uses the SELECT FOR UPDATE SKIP LOCKED syntax, which means the same row cannot be processed multiple times.

Step 1: Configure distributed polling. The query in the polling database adapter needs to use distributed polling in order to avoid data duplication.


Q) InMemoryOptimization
This property indicates to Oracle BPEL Server that this process is a transient process (synchronous) and dehydration of the instance is not required.
true: Oracle BPEL Process Manager keeps instances in memory only.
false (default): instances are persisted completely and recorded in the dehydration store database for a synchronous BPEL process.

Q) CompletionPersistPolicy
This property is only used when inMemoryOptimization is set to true.
on (default): The completed instance is saved normally.
deferred: The completed instance is saved, but with a different thread and in another transaction. If the server fails, some instances may not be saved.
faulted: Only the faulted instances are saved.
off: No instances of this process are saved.

<component name="mybpelproc">
...
<property name="bpel.config.completionPersistPolicy">faulted</property>
<property name="bpel.config.inMemoryOptimization">true</property>
...
</component>

What are the BPEL properties that determine how much data is saved to the database during the dehydration process?

inMemoryOptimization:
- Applicable only for transient BPEL processes.
- When inMemoryOptimization = true, the process is maintained in memory until it completes.
- Dehydration does not occur for this kind of process.

completionPersistLevel:
- Controls what type of data is saved after process completion.
- Works only when inMemoryOptimization = true.
- When completionPersistLevel = all, it saves the final variables, audit data and work-item data.
- When completionPersistLevel = instanceHeader, it saves only the instance metadata.

completionPersistPolicy:
- Controls when to persist the instance.
- faulted: only faulted instances are saved.
- on: all instances are saved.
- off: nothing gets saved.
- deferred: finished instances are saved, but on a different thread and in another transaction; if the server fails, some instances may not be saved.

The above three properties are used together. If used properly, they can reduce database growth as well as increase throughput.


In more detail:

inMemoryOptimization:
This property indicates to Oracle BPEL Server that this process is a transient process and dehydration of the instance is not required.
When set to true, Oracle BPEL Server keeps the instances of this process in memory only during the course of execution.
This property can only be set to true for transient processes (process type does not incur any intermediate dehydration points during execution).

- false (default): instances are persisted completely and recorded in the dehydration store database for a synchronous BPEL process.
- true: Oracle BPEL Process Manager keeps instances in memory only.



completionPersistLevel:
This property controls what type of data is saved after process completion.
It is applicable to transient BPEL processes (a process type that does not incur any intermediate dehydration points during execution).
This property is only used when inMemoryOptimization is set to true.
This parameter strongly impacts the amount of data stored in the database (in particular, the cube_instance, cube_scope, and work_item tables). It can also impact throughput.
<component name="mybpelproc">
...
<property name="bpel.config.completionPersistPolicy">faulted</property>
<property name="bpel.config.inMemoryOptimization">true</property>
...
</component>

The possible values for completionPersistLevel are:
- all: saves the final variables, audit data and work-item data.
- instanceHeader: saves only the instance metadata.


Controls when to persist the instance
When property set to
Faulted – only faulted instances are saved
On – all the instances are saved
Off – nothing gets saved
Deferred – finished instances are saved
The completed instance is saved, but with a different thread and in another transaction, If a server fails, some instances may not be saved.
The server fails – some instances may be saved
The above three properties are used together. If used properly, this can reduce the database growth as well as increase the throughput.

OneWayDeliveryPolicy
This property controls the database persistence of messages entering Oracle BPEL Server. It is used when we need a sync-type call based on a one-way operation, mainly when we need to make an adapter call synchronous to the BPEL process.
By default, incoming requests are saved in the delivery service database table dlv_message.

- async.persist: Messages are persisted in the database.
- async.cache: Messages are stored in memory.
- sync: Direct invocation occurs on the same thread.

<component name="UnitOfOrderConsumerBPELProcess">
...
<property name="bpel.config.transaction" >required</property>
<property name="bpel.config.oneWayDeliveryPolicy">sync</property>
...
</component>

General Recommendations:
1. If your synchronous process exceeds, say, 1000 instances per hour, it is better to set inMemoryOptimization to true and completionPersistPolicy to faulted. That gives better throughput, only faulted instances get dehydrated in the database, and it goes easy on the purge (purging historical instance data from the database).

2. Do not include any settings that persist your process, such as dehydrate, mid-process receive, wait or onMessage activities.

3. Have good logging in your BPEL process so that you can see log messages in the diagnostic log files for troubleshooting.

Q.) What is a flow activity? What is a flowN activity and how does it leverage the flow activity?
The flow activity is used when parallel execution is needed; to get real parallelism, the non-blocking invoke property should be set to true at the partner link level. In a flow, the number of parallel branches is defined at design time and is static,
whereas in a flowN the number of parallel branches is not static and is determined at run time.

Q.) If the BPEL process is an async or one-way process, then the delivery policy attribute can have three values:
1) async.persist  2) async.cache  3) sync
async.persist: Messages are persisted in the database.
async.cache: Messages are stored in an in-memory hash map.
sync: Direct invocation occurs on the same thread.

Q.) If the BPEL process is a sync process, then the transaction context between the client and the BPEL process is controlled by the "Transaction" attribute.
The values of this attribute are: 1) required  2) requiredNew

bpel.config.transaction property.
bpel.config.transaction=required
The caller's transaction is joined (if there is one) or a new transaction is created (if there is not one)
Invoked messages are processed using the same thread in the same transaction

bpel.config.transaction=requiredNew
A new transaction is always created and an existing transaction (if there is one) is suspended.

Q.) What is the default level of transaction in a composite <required | requiredNew>?
The default is requiredNew.

Q.) What is the getPreference property? How do we set it and what advantage does it provide?
Hard-coding is not a good practice, so to avoid hard-coding, preference variables can be used; the value of a preference variable is accessed using ora:getPreference(). The preference variable value can be changed without re-deploying the code, via the EM console MBean properties.
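A minimal sketch, assuming a preference named retryCount (made up for illustration). It is declared on the BPEL component in composite.xml:

<component name="MyBPELProcess">
  <!-- preference value readable from the process and editable in EM -->
  <property name="bpel.preference.retryCount">3</property>
</component>

Inside the BPEL process it is then read with ora:getPreference('retryCount'), for example in an assign or a condition, and the value can later be changed from the EM MBean browser without redeployment.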

Q) OSB and BPEL
OSB is the lightweight service bus used where there is not much business logic involved and there is a need to just get the message routed between systems; when there is more business logic involved in the process, BPEL is used.
BPEL supports multiple transactions (required and requiredNew), e.g. when the flow involves multiple resources (DB, JMS, FTP, etc.); OSB supports a single transaction.
BPEL is stateful and OSB is stateless.
Compensation logic is not there in OSB.
OSB does not support long-running processes (e.g. human tasks).
BPEL handles asynchronous processes more effectively than OSB.

Q) What Is The Role Of Oracle Mediator?
Oracle Mediator provides a lightweight framework to mediate between various components within a composite application. 
"Oracle Mediator converts data to facilitate communication between different interfaces exposed by different components  that are wired to build a SOA composite application."

Q) Echo in Mediator
An Echo is a simple "reply to the caller" pattern where the requester immediately gets back a response.
The purpose of the echo option is to expose all the Oracle Mediator functionality as a callable service without having to route it to any other service.

For example, you can call an Oracle Mediator to perform a transformation, a validation, or an assignment, and then echo the result back to your application without routing it anywhere else.
You can also echo source messages back to the initial caller after any transformation, validation, assignment, or sequencing operations are performed.
The echo option is only available for inbound service operations and is not available for event subscriptions.

The echo option is available for asynchronous operations only if the Oracle Mediator interface has a callback operation. In this case, the echo is run on a separate thread.

For synchronous operations with a conditional filter, the echo option does not return a response to the caller when the filter condition is set to false. Instead, it returns a null response.

Q.) There are two types of routing rules in Mediator
Dynamic routing rules
These rules are used for asynchronous interactions only. Business rules are used, and the endpoint is determined at run time from the business rules.
A dynamic routing rule lets you externalize the routing logic to an Oracle Rules Dictionary, which in turn enables dynamic modification of the routing logic in a routing rule.
Static routing rules
These rules are used for synchronous and asynchronous interactions and are defined at design time.
Static routing rules can be executed in two ways:
Sequential
As the name suggests, Oracle Mediator evaluates routing and performs the resulting actions sequentially. Sequential routing rules execute in the same thread and transaction as the caller. Oracle Mediator always enlists itself into the global transaction propagated through the thread that is processing the incoming message. In case of an exception, it rolls back the transaction.

Parallel
As the name suggests, Oracle Mediator queues and evaluates routing in parallel in different threads. Oracle Mediator initiates a new transaction for processing each parallel rule. In case of an exception, it rolls back the transaction in its own thread.

Q)What Is Resequencing In Mediator?
The resequencing feature of the Oracle Mediator reorders sets of messages that might arrive to the Oracle Mediator in the wrong sequence. You can define resequencing for all operations in an Oracle Mediator or for a specific operation

Q) How does Oracle BPEL identify asynchronous responses?

A response from an asynchronous web service is not guaranteed to be received within a specified time frame, and many instances of the same service might be invoked before a response is obtained. So how does Oracle BPEL identify and relate the responses to the appropriate requests and proceed with further activities that may be scheduled?

The answer is WS-Addressing.

Q) WS-Addressing
You can use WS-Addressing to identify asynchronous messages and ensure that asynchronous callbacks locate the appropriate client.
WS-Addressing in an Asynchronous Service
Since there can be many active instances at any time, the server must be able to direct web service responses to the correct BPEL process service component instance. WS-Addressing makes that possible.

Q) What is WS-Addressing?

WS-Addressing is a transport-neutral mechanism by which web services communicate addressing information.

"SOAP envelopes & headers used within web services for transporting data through transport layers like HTTP does not possess the intelligence to specify unique addressing information."

Hence, WS-Addressing evolved which contained endpoint references (EPR) and message information headers for identification.

This information is processed independently of the transport mechanism or application. 

By default, Oracle BPEL PM implements WS-Addressing for all asynchronous web service calls,

Hence we don't have to explicitly implement identification relationship between the incoming & outgoing messages.

Q) WS-Addressing in an Asynchronous Service?

There can be many active instances at any time, so the server must be able to direct web service responses to the correct BPEL process service component instance.

You can use WS-Addressing to identify asynchronous messages and ensure that asynchronous callbacks locate the appropriate client.

Q) Use case: In typical business systems like an "Order Management System", we place an order request, and delivering the ordered items may take some time. Once the requested items are delivered, the order request is closed. In the meantime we can also cancel the order request; to cancel it, we provide the unique data supplied while placing the order request.

Use correlation.

Q). If A calls B and B calls C, and the response would normally flow in reverse (C responds to B and B responds to A), how can C respond to A directly without going through B?

Oracle BPEL Correlation Exemplified

In short, Correlation is a BPEL technique which provides correlation of asynchronous messages based on the content of the message. 

Correlation Sets are used to correlate the messages when interacting with the Asynchronous systems.

To identify the messages (requests, delayed responses) in asynchronous communication we can use either

1)      WS-Addressing

2)      Correlation set 

You can use correlation sets in invoke, receive, pick, and reply activities to indicate which correlation sets occur in the messages that are sent and received. 

Whenever a synchronous web service is invoked from Oracle BPEL via a partner link, only one port is established for communication with the exposed web service which is used by both request & response messages. 

However, when an asynchronous web service is invoked, two ports are opened for communication: one for request & one for response messages.

Q) Correlation

Correlation is a technique that helps to correlate messages between a producer and consumer of the messages in an asynchronous transaction. 

Correlation helps in scenarios where the interactions are complex.  

Mid-process receive activities use correlation sets to correlate incoming messages with the running instances that are waiting for a message to continue their operations.

Correlation sets in an asynchronous service

They direct web service responses to the correct BPEL process service component instance. You can use correlation sets to identify asynchronous messages and ensure that the asynchronous callback locates the appropriate client.

Correlation Set

A correlation set is used to tie together a partner conversation and to associate messages with business processes.

Use correlation sets to ensure that asynchronous callbacks locate the appropriate client.

Creating Correlation Set

Step 1:

-> On the receive or invoke activity, right-click and choose Setup Correlation.

-> In the properties section, click 'Add'.

-> Provide a name and type for the property.

Initiate attribute (values: yes, no)

-> When set to yes, the correlation set is initiated with the values of the properties available in the message being transferred.

-> When set to no, the correlation set validates the value of the property available in the message.

-> The property is mapped to the message content via a property alias (drag and drop in the editor).

Pattern Attribute: (Value Set - in, out, in-out)

When the value is 'in', it means that the correlation property is set/validated on the incoming message

When the value is 'out', it means that the correlation property is set/validated on the message going out of BPEL

In case of 'in-out', the property will be set/validated on both incoming & outgoing messages 

Property — A property is an arbitrarily named token. ...

Property alias — A property alias is a rule that tells the BPEL runtime how to map data from a message into property value. ...

Correlation set — A correlation set is a compound key made up of one or more property values, actually, it is a property set.
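A rough BPEL 1.1-style sketch of these pieces (the property name orderId, the correlation set OrderCS and the message/element names are all invented): the property and property alias live in a WSDL, the correlation set in the BPEL process, and the receive activities reference it.

<!-- In a WSDL: property and property alias -->
<bpws:property name="orderId" type="xsd:string"/>
<bpws:propertyAlias propertyName="tns:orderId"
                    messageType="tns:OrderRequestMessage" part="payload"
                    query="/ns1:OrderRequest/ns1:OrderId"/>

<!-- In the BPEL process -->
<correlationSets>
  <correlationSet name="OrderCS" properties="tns:orderId"/>
</correlationSets>

<receive name="ReceiveOrder" createInstance="yes" partnerLink="client" operation="initiate" variable="orderReq">
  <correlations>
    <correlation set="OrderCS" initiate="yes"/>
  </correlations>
</receive>

<receive name="ReceiveCancelOrder" createInstance="no" partnerLink="client" operation="cancel" variable="cancelReq">
  <correlations>
    <correlation set="OrderCS" initiate="no"/>
  </correlations>
</receive>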

CORRELATION ID

When an asynchronous service is initiated with the invoke activity, a correlation ID unique to the client request is also sent (using WS-Addressing), because multiple processes may be waiting for service callbacks.

Oracle BPEL Server must know which BPEL process instance is waiting for a callback message from, say, the loan application approver web service. The correlation ID enables Oracle BPEL Server to correlate the response with the appropriate requesting instance.

Suppose we have asynchronous calls: call 1, call 2, call 3, call 4, call 5, and BPEL is waiting for responses. There is a chance that the response for call 3 comes before call 2's, or the response for call 5 comes before call 1's. The correlation ID is used to control this and map request and response messages.

Here we have a few properties whose meanings are explained below:

Initiate = yes ---> we assign the value to the correlation variable.

Initiate = no ---> we validate the value against the value stored in the correlation variable.

Pattern = in ---> applied to a message coming into the BPEL process.

Pattern = out ---> applied to a message going out of the BPEL process.

Pattern = in-out ---> applied to messages going both into and out of the BPEL process.

Why & Where Correlation?

When using an asynchronous service that does not support WS-Addressing.

When receiving unsolicited messages from another system.

When the message travels through several services and the response is solicited by the initial service from the last service directly.

When the conversation is in the form A > B > C > A instead of A > B > A.

When communicating through files.

Q) Creating a Parallel Flow
You can create a parallel flow in a BPEL process service component with the flow activity.
The flow activity enables you to specify one or more activities to be performed concurrently. 
The flow activity also provides synchronization. 
The flow activity completes when all activities in the flow have finished processing.

Note: Branches in a flow activity are executed serially in a single thread (pseudo-parallel); see the nonBlockingInvoke note below.
A flow activity typically contains many sequence activities.
Each sequence is performed in (logical) parallel.
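A minimal WS-BPEL sketch of a flow with two branches (partner links, operations and variables are invented); the flow completes only when both branches have finished:

<flow name="ParallelChecks">
  <sequence name="CreditBranch">
    <invoke name="InvokeCreditCheck" partnerLink="CreditService" operation="checkCredit"
            inputVariable="creditReq" outputVariable="creditResp"/>
  </sequence>
  <sequence name="AddressBranch">
    <invoke name="InvokeAddressCheck" partnerLink="AddressService" operation="checkAddress"
            inputVariable="addrReq" outputVariable="addrResp"/>
  </sequence>
</flow>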

Q) Parallel Flow in a BPEL Process
Parallel flows are especially useful when you must perform several time-consuming and independent tasks.
A BPEL process service component must sometimes gather information from multiple asynchronous sources. Because each callback can take an undefined amount of time (hours or days), it may take too long to call each service one at a time. By breaking the calls into a parallel flow, a BPEL process service component can invoke multiple web services at the same time, and receive the responses as they come in. This method is much more time-efficient.

 Q) Execution of Parallel Flow Branches in a Single Thread
Branches in flow, flowN, and forEach activities are executed serially in a single thread.
To achieve pseudo-parallelism, you can configure invoke activities to be non-blocking with the nonBlockingInvoke deployment descriptor property. When this property is set to true,
the process manager creates a new thread to perform each branch's invoke activity in parallel.

Q.) Compensate Handler
It is used to invoke a compensating sequence of activities as a result of a fault or execution of a compensate handler.

Q.) There are two popular ways of securing web services; how do we invoke them from BPEL?
Basic services:
Basic services require authentication information, i.e. the username/password to be passed in the HTTP header.
Services using WS-Security:
These services require the authentication information (username/password) to be sent as WS-Security tokens in the SOAP envelope.

Q.) How can one add the  HTTP Authentication in BPEL:
Add the following properties in deployment descriptor i.e., the bpel.xml, under the partner link for that service.
<property name="httpHeaders">credentials</property>
<property name="httpUsername">manojnair</property>
<property name="httpPassword">hello@123</property>

Adding WS-Security tokens in BPEL:
<property name="wsseHeaders">credentials</property>
<property name="wsseUsername">manojnair</property>
<property name="wssePassword">hello@123</property>
That's it, you are done once you embed this into your code.
This is how one can invoke secured services.

<property name="weblogic.wsee.wsat.transaction.flowOption" type="xs:string" many="false">WSDLDriven</property>
<property name="oracle.webservices.auth.username" type="xs:string" many="false" override="may">bpel</property>
<property name="oracle.webservices.auth.password" type="xs:string" many="false" override="may">welcome1</property>
<property name="oracle.webservices.auth.password" type="xs:string" many="false" override="may">welcome1</property>

Q) What Are Transient And Durable BPEL Processes?
Durable: A long-running process, initiated through a one-way invocation, that incurs one or more dehydration points in the database during execution. Ex: asynchronous processes.
-> Long-running process
-> One-way invocation
-> Dehydration takes place
-> Ex: Asynchronous

Transient: A short-lived, request-response style process that does not incur dehydration during its execution. Ex: synchronous processes.
-> Short-lived process
-> Request-response process
-> Dehydration doesn't take place
-> Ex: Synchronous

Q) How Can We Make A Partner Link Dynamic?
If we have to send requests to different services which share the same WSDL (the same interface), then a dynamic partner link is used; using the WS-Addressing schema we can set the endpoint dynamically to send the request to the desired service.

Q) What Are HA File And FTP Adapters?
In a clustered environment, the File and FTP adapters should be used in HA (High-Availability) mode.
Inbound: controlled by control files; this avoids a race between the managed servers when reading files, since a reference to the files read by each managed server is maintained in the control directory.
Outbound: controlled by a DB mutex table in the SOA dehydration store; this avoids duplicates being written to the same file when all the managed servers in the cluster process the same messages.

Q) What Is An XA Data Source? How Does It Differ From A Non-XA Data Source?
XA transaction
An XA transaction involves a coordinating transaction manager, with one or more databases (or other resources, like JMS) all involved in a single global transaction.
-> Global transaction
-> More than one resource involved
-> Ex: transferring money and paying tax in another DB.
Non-XA transaction
Non-XA transactions have no transaction coordinator, and a single resource does all its transaction work itself (this is sometimes called a local transaction).
-> Local transaction
-> Involves only one resource
-> Ex: a transfer between two tables in the same DB (source-target), like transferring money.

Q) How Does An Async Request Run In The Backend?
The sequence of events involved in the delivery of an invocation message is as follows:
1. The client posts the message to the delivery service.
2. The delivery service saves the invocation message to the dlv_message table. The initial state of the message is 0 (undelivered).
3. The delivery service schedules a dispatcher message to process the invocation message asynchronously.
4. The dispatcher message is delivered to the dispatcher through the afterCompletion() call. Therefore, the message is not delivered if the JTA transaction fails.
5. The dispatcher sends a JMS message to the queue: it places a very short JMS message in the in-memory queue (jms/collaxa/BPELWorkerQueue) in OC4J JMS. The small JMS message triggers the Worker Bean in the downstream step.
6. This message is then picked up by a Worker Bean MDB, which asks the dispatcher for work to execute. If the number of Worker Bean MDBs currently processing activities for the domain is sufficient, the dispatcher module may decide not to request another MDB.
7. The MDB fetches the invocation message from the dispatcher.
8. The MDB passes the invocation message to Oracle BPEL Server, which updates the invocation message state to 1 (delivered), creates the instance, and executes the activities in the flow until a breakpoint activity is reached.

Q) What happens when a one-way message comes to the SOA Server, and what happens when a sync message comes?

A one-way invocation offers several advantages in terms of scalability, because the service engine's thread pool (invoker threads) executes the request when a thread is available. However, the disadvantage is that there is no guarantee that it executes immediately.

How do you ensure that one-way invocations behave synchronously?

If you require a synchronous-type call based on a one-way operation, then you can use the oneWayDeliveryPolicy property.

Specify this in composite.xml.

The following values are possible:

async.persist: Messages are persisted in the database.
async.cache: Messages are stored in memory.
sync: Direct invocation occurs on the same thread.

Using the sync option offers the best performance, but the requester will perceive that it took longer to post the message, due to the increased coupling between the requester and the target.

Similarly, using the async.cache option reduces the overhead by keeping the message in memory instead of persisting it. However, if the server fails before the message is processed, the message is lost because it is only stored in memory.

Q) What Are The dspMaxThreads And ReceiverThreads Properties? Why Are They Important?
The ReceiverThreads property specifies the maximum number of MDBs that process asynchronous requests across all domains,
whereas dspMaxThreads is the maximum number of MDBs/threads that process asynchronous requests within a single domain.
So we need to ensure that the dspMaxThreads value is not greater than ReceiverThreads.

Conclusions
BPM is a discipline for the management of atomic business processes whose functional boundaries are well-defined, rather than for the management of long-running, end-to-end business processes that span multiple disparate systems.

BPEL is a subset of the Business Process Management discipline.

BPEL bridges the gap between BPM and SOA. BPEL does what BPM cannot do on its own, i.e. execute/orchestrate processes across departmental/functional systems in a standard way, while at the same time providing visibility & control to the business process owner.

BPEL is not good at handling complex workflows that involve a lot of human tasks (work items) and which demand quick turn-around times.

EAI can fill the cross-system workflow gap to some extent, but it does not give visibility and control to the process owners.

BPM tools are not suitable for high-volume transaction processing, while most BPEL tools are.

Recommendations
Use BPM for inter-departmental function workflows/processes that are centered around human tasks and documents

Orchestrate end-to-end, cross-system processes using a BPEL engine.

In a green-field project, if all functional workflows are going to be built from scratch, then BPM can be used for process management and process-integration instead of BPEL but if more than one packaged application/system is going to be used in the project, then using BPEL along with BPM is a better option.

Use BAM in tandem with BPM and BPEL for end-to-end process visibility.


Q) How do you increase the transaction timeouts in SOA?
To increase the transaction timeout, the timeout value in all of the settings below needs to be changed to the expected value:

1. JTA timeout (WebLogic domain level)
2. Engine Bean (EJB transaction timeout)
3. Delivery Bean (EJB transaction timeout)

Q) Is it possible to use MS SQL Server as the dehydration store with SOA Suite? If yes, how?

Yes, it is possible.
To automatically maintain long-running asynchronous processes and their current state information in a database while they wait for asynchronous callbacks, you use a database as a dehydration store. Storing the process in a database preserves the process and prevents any loss of state or reliability if a system shuts down or a network problem occurs. This feature increases both BPEL process reliability and scalability. You can also use it to support clustering and failover.

Q) What is SOA governance? What are its functions?
Service-Oriented Architecture (SOA) governance is a concept used for activities related to exercising control over services in an SOA.
Some key activities that are often mentioned as being part of SOA governance are: 
Managing the portfolio of services: This includes planning the development of new services and updating current services.
Managing the service lifecycle: This is meant to ensure that updates of services do not disturb current services to the consumers.
Using policies to restrict behavior: Consistency of services can be ensured by having the rules applied to all the created services.
Monitoring performance of services: The consequences of service downtime or underperformance can be severe because of service composition. Therefore action can be taken instantly when a problem occurs by monitoring service performance and availability.

Q) What is the singleton property in SOA?

In a clustered environment, when a message should be processed by only one SOA managed server, the singleton property needs to be defined at the adapter level.
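For illustration, a minimal sketch of how this might look on an inbound JCA adapter service in composite.xml (the service and JCA file names are hypothetical):

<service name="PollOrdersFile">
  <binding.jca config="PollOrdersFile_file.jca">
    <!-- Only one managed server in the cluster activates this inbound adapter -->
    <property name="singleton">true</property>
  </binding.jca>
</service>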

Q) What do you mean by a non-idempotent activity? Which activities are non-idempotent by default?

"Activities like Pick, Wait, receive, reply and checkpoint() are called non-Idempotent activity and during the execution of the process
 whenever these activities are encountered then it gets dehydrated to the dehydration store"

Q) How Can We Make A Partner Link Dynamic?
If we have to send the request to different services that share the same WSDL, a dynamic partner link is used; using the WS-Addressing schema we can set the endpoint dynamically and route the request to the desired service, as sketched below.
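A minimal sketch of an assign that re-points a partner link at runtime using a WS-Addressing EndpointReference (the partner link name and URL are hypothetical, and the addressing namespace version can differ between SOA releases):

<assign name="SetDynamicEndpoint">
  <copy>
    <from>
      <EndpointReference xmlns="http://schemas.xmlsoap.org/ws/2003/03/addressing">
        <Address>http://soahost:8001/soa-infra/services/default/TargetService/targetservice_ep</Address>
      </EndpointReference>
    </from>
    <to partnerLink="TargetService"/>
  </copy>
</assign>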

Q) What is a web service?
Web services are application components that are self-contained and self-describing and that provide services over open, standard protocols (e.g. SOAP over HTTP).

HumanTask
To create users:
console -> Security Realms -> myrealm -> Users and Groups -> new -> name,password,confirmPassword: -> userCreated successfully

Human Task definition sections (in the .task editor):
General
Data
Assignment
Presentation
Deadlines
Notification
Access
Events
Documents

Q) How can you make a request dynamic?
If you need to send a request to a number of services that share the same WSDL, the request has to be made dynamic. You can do this by using the WS-Addressing schema to set the endpoint reference at runtime, as in the dynamic partner link example above.

Tuning properties for external web service calls via references, when the target service is taking time:

http.connTimeout and http.readTimeout: HTTP connection and read timeouts on the outbound binding.

jca.retry.count and jca.retry.interval: retry settings for outbound JCA interactions.

SyncMaxWaitTime: the maximum time the engine waits for the response of a synchronous request before timing out.

BPEL component persistence/audit properties:
<property name="inMemoryOptimization">true</property>
<property name="completionPersistPolicy">faulted</property>
<property name="completionPersistLevel">all</property>

Case 1: If the BPEL process is asynchronous or one-way, the oneWayDeliveryPolicy attribute can have three values:
1) async.persist  2) async.cache  3) sync
async.persist: Messages are persisted in the database.
async.cache: Messages are stored in an in-memory hash map.
sync: Direct invocation occurs on the same thread.

Case 2: If the BPEL process is synchronous, the transaction context between the client and the BPEL process is controlled by the transaction attribute.
The values of this attribute are 1) required  2) requiresNew.
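For example, a sketch of how the transaction behaviour of a synchronous BPEL component might be configured in composite.xml (the component name is illustrative; in 11g/12c the property is typically bpel.config.transaction):

<component name="OrderProcessor">
  <implementation.bpel src="OrderProcessor.bpel"/>
  <!-- Join the caller's transaction instead of starting a new one -->
  <property name="bpel.config.transaction">required</property>
</component>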

Parallel Flow in a BPEL Process
Parallel flows are especially useful when you must perform several time-consuming and independent tasks. A BPEL process service component must sometimes gather information from multiple asynchronous sources. Because each callback can take an undefined amount of time (hours or days), it may take too long to call each service one at a time. By breaking the calls into a parallel flow, a BPEL process service component can invoke multiple web services at the same time and receive the responses as they come in. This method is much more time-efficient.
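A minimal sketch of a BPEL 2.0 flow that invokes two hypothetical asynchronous services in parallel and then waits for their callbacks (partner link, operation and variable names are assumptions):

<flow name="BookInParallel">
  <sequence name="HotelBranch">
    <invoke name="InvokeHotel" partnerLink="HotelService" operation="book" inputVariable="hotelReq"/>
    <receive name="HotelCallback" partnerLink="HotelService" operation="onBookResult" variable="hotelResp"/>
  </sequence>
  <sequence name="FlightBranch">
    <invoke name="InvokeFlight" partnerLink="FlightService" operation="book" inputVariable="flightReq"/>
    <receive name="FlightCallback" partnerLink="FlightService" operation="onBookResult" variable="flightResp"/>
  </sequence>
</flow>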

Compensate
It is used to invoke a compensating sequence of activities as a result of a fault or the execution of a compensation handler.

Throttling
When you work with inbound JCA adapters (for instance JMS, AQ, MQ) in SOA Suite, you need to be able to control the TPS (transactions per second). This allows you to avoid stuck threads when an unexpected batch of messages arrives, as part of domain load tuning.

1. Add the minimumDelayBetweenMessages property
This is the parameter that does the work: it adds a thread sleep as part of the instance execution, i.e. on a per-polling-thread basis. The setting is measured in milliseconds.

2. How to do that?
The property is added in the SCA file (composite.xml) of your project, as a <service> property on the service whose partner link adapter connects to your JCA resource, as in the sketch below.

Note: it is not a binding property.
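A sketch of where the property might sit, assuming an inbound JMS adapter service named ReadMsgQueue (all names are illustrative):

<service name="ReadMsgQueue">
  <binding.jca config="ReadMsgQueue_jms.jca"/>
  <!-- Service-level property (not a binding property): each polling thread sleeps 500 ms between messages -->
  <property name="minimumDelayBetweenMessages">500</property>
</service>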

When you deploy your code in the SOA server, you will be able to check the value in the JCA adapter using EM console. You can also change it on the fly, in case you need to tune the value and test it.

3. How to calculate the TPS
Consider a scenario where you need to control throttling in a domain that consists of:
·         4 managed servers nodes
·         2 threads per adapter (set in InboundThreadCount parameter)
·         500 milliseconds (0.5 second) set in minimumDelayBetweenMessages

To calculate the TPS, use this formula:
(MS nodes * thread count) / (minimumDelay/1000)
"In this example, you will have (4 * 2) = 8 threads connected in the queue. Each thread will delay 500/1000 = 0.5 second 
before pushing a new message. You will have a max 16 TPS."

This means that when you set the minimumDelayBetweenMessages value, you are also setting the maximum TPS. The actual throughput can be lower if the transaction execution time per thread is higher than the minimumDelayBetweenMessages value.

Q) I have pushed a message into a queue and my server went down before the message was consumed. What will you do so that no data is lost?

This can be achieved by using session acknowledgement.

On the consumer side, create the queue session with client acknowledgement (Session.CLIENT_ACKNOWLEDGE) instead of AUTO_ACKNOWLEDGE.

That means the consumer has to acknowledge each message explicitly.

Only when the consumer sends the acknowledgement for a message is it deleted from the queue; otherwise it remains in the queue and can be redelivered after the server comes back up.

(On the producer side, sending the message with DeliveryMode.PERSISTENT additionally ensures the JMS server stores it durably.)

Convert xsd:anyType data into XML data

Create the variable(s) into which you want to convert the anyType data.

Use the following function to parse the value: oraext:parseXML(bpws:getVariableData('inputVariable','request','/ns2:Request/ns2:Details/ns2:xmlData')), where xmlData is of type anyType.

Q). Calling Secure Services from BPEL

There are two popular ways of securing web services, and we need to know how to invoke each of them from BPEL:

Basic authentication: these services require the authentication information (username/password) to be passed in the HTTP header.

WS-Security: these services require the authentication information (username/password) to be sent as WS-Security tokens in the SOAP envelope.
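For context, a sketch of what a WS-Security UsernameToken looks like inside the SOAP header (the values are placeholders; in Oracle SOA this is normally attached through an OWSM client security policy rather than built by hand):

<soap:Header xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
    <wsse:UsernameToken>
      <wsse:Username>serviceUser</wsse:Username>
      <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">secret</wsse:Password>
    </wsse:UsernameToken>
  </wsse:Security>
</soap:Header>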

Scope

A scope in BPEL can be used to group together a block of activities and to limit the visibility of variables declared within that scope.

Q) We have three services: Flight Service, Hotel Service and Cab Service. If any of them cannot complete its booking, what will you do to roll back all the actions?

Ans: the compensate activity and the compensateScope activity.

These help to do a business rollback.

Compensation occurs when a process cannot complete several operations after completing others. The process must return and undo the previously completed operations.

Transactions that were committed during the business process execution and that require a business rollback can be undone using the compensation handler.

A compensation handler helps to roll back the transactions that were completed in the previous steps of the BPEL process.

The compensation we attach is nothing but a call to the corresponding cancellation service.

A catch block catches the error and then runs the compensation.

This compensation executes the compensation handlers of all scopes that completed before the point of failure.

Compensation handler

Compensation handlers contain the activities that need to be executed as part of the compensation flow. 

These handlers are defined per scope, similar to catch blocks.

For each scope, you decide whether a compensation handler is needed.

Compensate activity

A compensate activity can only be invoked for a scope that has already finished successfully.

This activity enables you to start compensation on a specified inner scope that has already completed successfully. 

This activity must only be used from within a fault handler, another compensation handler, or a termination handler. 

Compensate activities can only be executed from catch blocks and compensation handlers. 

Compensation activities either trigger compensation for all enclosed and completed scopes using the compensate activity (supported in BPEL 1.1 and 2.0), 

 or can trigger compensation for one specific scope using the compensateScope activity (only BPEL 2.0). 

Compensation handlers can only be defined on scope level, not on sequence level.
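To tie this together, here is a sketch (BPEL 2.0; partner links, operations and variables are assumptions) of a scope with a compensation handler, and an outer fault handler that triggers compensation for it:

<scope name="BookTripScope">
  <faultHandlers>
    <catchAll>
      <!-- A later booking failed: undo the already-completed flight booking -->
      <compensateScope target="BookFlightScope"/>
      <rethrow/>
    </catchAll>
  </faultHandlers>
  <sequence>
    <scope name="BookFlightScope">
      <compensationHandler>
        <!-- Business rollback: cancel the booking made by this scope -->
        <invoke name="CancelFlight" partnerLink="FlightService" operation="cancelBooking" inputVariable="cancelReq"/>
      </compensationHandler>
      <invoke name="BookFlight" partnerLink="FlightService" operation="bookFlight" inputVariable="bookReq" outputVariable="bookResp"/>
    </scope>
    <invoke name="BookHotel" partnerLink="HotelService" operation="bookHotel" inputVariable="hotelReq" outputVariable="hotelResp"/>
  </sequence>
</scope>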

 

