10 March 2016

KIE Server Apache Thrift extension

I have been using the JBoss KIE Server and Drools for some time now in the Java domain, telling my colleagues, who are mostly PHP experts, how great it is to get the spaghetti code out of my imperative code base. After getting on their nerves one too many times, they nailed me down to give a presentation on the subject and to find a way to let the PHP faction benefit from this great piece of technology.
Since I already introduced the Apache Thrift(1) protocol (without the underlying transport part of the Thrift stack) a while ago for binary integration with Java-based micro-services, it seemed natural to extend the KIE Server REST transport with Apache Thrift. JBoss Principal Software Engineer Maciej Swiderski wrote a great blog post(2) about the new possibilities for extending the KIE Server.

So why add Apache Thrift to the equation, since we already have JSON/XML and even SOAP right out of the KIE Server box?
  • Speed
    • very few bytes on the wire, cheap (de)serialization (think large object graphs)
  • Validation
    • encoding the semantics of the business objects once, in a Thrift IDL schema, for all services and all supported languages, in an object-oriented, typed manner (as far as supported by the target language)
  • Natural language bindings
    • for example, Java uses ArrayList while C++ uses std::vector
  • Backward compatibility
    • the protocol is allowed to evolve over time, within certain constraints, without breaking already running implementations
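To make the IDL and backward-compatibility points concrete, here is a small, purely illustrative Thrift schema; the namespaces, struct and field names are made up for this sketch and are not taken from the project:

```thrift
// Illustrative business object; compiled once per target language.
namespace java com.example.facts
namespace php ExampleFacts

struct Invoice {
  1: required string invoiceNumber,
  2: optional string customerId,
  // New fields get fresh, never-reused tags and should be optional,
  // so already deployed clients keep working (backward compatibility).
  3: optional i64 issuedAtMillis
}
```

The Thrift compiler generates matching classes for every supported language from this single schema, which is where the "encode the semantics once" benefit comes from.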

There are a number of protocols to choose from within Apache Thrift, but for optimal performance there is only one: TCompactProtocol. It is the most compact binary format and is typically more efficient to process than the other protocols.

The project is published on GitHub(3) and consists mainly of two parts: the org.kie.server.thrift repo and the thrift-maven-plugin repo. Please build the thrift-maven-plugin first, as it is a dependency of the server. It contains the Thrift compiler version 0.9.2 for Windows and Linux (tested on CentOS / RHEL) for compiling the Thrift IDL files.
The org.kie.server.thrift repo downloads the KIE Server war file, extracts it, adds the Thrift extension and repackages the sources into a new war file ready for deployment. Tested on Wildfly 8.2.1.
How to setup a KIE server and accompanying workbench is explained under (4).
Test facts and rules with matching PHP and Java clients are also provided under (3).

Workflow




Architecture

From the viewpoint of the KIE Server there is a known and an unknown model. The known model consists of the objects used internally by the KIE Server to handle its commands (Command pattern). These objects are mirrored to IDL in the kie-server-protocol Maven module to make them available to all Thrift-supported languages. The unknown model is of course the graph of objects that needs to be transported into the KIE core engine for execution. The unknown model must also be designed with the Thrift IDL, so all objects that have to pass the Thrift protocol layer are of type TBase. These two object models force a sequential, two-step (de)serialization.
In the first step the known model is (de)serialized, revealing the KIE Server internal objects. This is handled by the Thrift message reader and writer classes that are registered on the RESTEasy framework, as used by Wildfly, for the application/xthrift content type. These known objects contain binary fields holding the unknown object bytes.
For the second deserialization I am forced to use a not-so-smooth trick. Since there is no way to tell from the bytes what type is represented (due to the compactness of the Thrift TCompactProtocol, which does not deliver a class name like XStream does), the fully qualified Java class name of the IDL-generated objects must be provided within the transporting known KIE Server objects. Now the (de)serialization can take place using the classloader from the deployed KIE container holding the unknown object model. On the client side, deserialization of the reply is easy as both models are known.
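The class-name trick can be sketched in plain Java. This is a hypothetical illustration only: JDK serialization stands in for the Thrift protocol layer, and the envelope class is made up; the real extension works on TBase objects and resolves the class through the deployed KIE container's classloader.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.math.BigDecimal;

public class EnvelopeSketch {

    // Stand-in for a "known" KIE Server transport object with a binary field.
    static class KnownEnvelope implements Serializable {
        final String payloadClassName; // FQCN of the payload type
        final byte[] payloadBytes;     // the "unknown" model, serialized
        KnownEnvelope(String cn, byte[] b) { payloadClassName = cn; payloadBytes = b; }
    }

    // Step one (client side): serialize the unknown object and record its class name.
    static KnownEnvelope wrap(Serializable unknown) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(unknown);
        }
        return new KnownEnvelope(unknown.getClass().getName(), bos.toByteArray());
    }

    // Step two (server side): resolve the type via the container classloader,
    // then deserialize the payload bytes.
    static Object unwrap(KnownEnvelope env, ClassLoader containerLoader) throws Exception {
        Class<?> type = Class.forName(env.payloadClassName, true, containerLoader);
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(env.payloadBytes))) {
            return type.cast(ois.readObject());
        }
    }

    public static void main(String[] args) throws Exception {
        KnownEnvelope env = wrap(new BigDecimal("19.99"));
        Object fact = unwrap(env, EnvelopeSketch.class.getClassLoader());
        System.out.println(fact.getClass().getName() + " " + fact);
    }
}
```

The point of the sketch is the shape of the pattern: the compact wire format carries no type information, so the envelope must.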
To allow other languages the use of Java objects like BigDecimal, which is great for monetary calculations, there are integrated converters with Thrift IDL representations in the Maven kie-server-java module to ease development. If such a TBase Java representation is not wrapped within another struct, it is converted automatically. Wrapped representations (forced to bind to a TBase type) can make use of the static conversion helper methods.
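As an illustration of what such a converter does, a BigDecimal can be split into its unscaled value (binary) and a scale (an i32), mirroring BigDecimal's own internal model, so any language can reassemble the exact value. The TBigDecimal struct and the method names below are assumptions for this sketch, not the actual module API:

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class BigDecimalConverter {

    // Stand-in for an IDL-generated TBase struct, e.g.
    // struct TBigDecimal { 1: binary unscaledValue, 2: i32 scale }
    static class TBigDecimal {
        byte[] unscaledValue;
        int scale;
        TBigDecimal(byte[] u, int s) { unscaledValue = u; scale = s; }
    }

    static TBigDecimal toThrift(BigDecimal value) {
        return new TBigDecimal(value.unscaledValue().toByteArray(), value.scale());
    }

    static BigDecimal fromThrift(TBigDecimal t) {
        return new BigDecimal(new BigInteger(t.unscaledValue), t.scale);
    }

    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("1234.5678");
        // Exact round trip: no precision is lost, unlike a double field would.
        System.out.println(fromThrift(toThrift(price)));
    }
}
```

Encoding the unscaled value as bytes rather than a double is what keeps monetary values exact across languages.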

Please study the source code for further details on the combination of technologies used.

Acknowledgments

HMM Deutschland GmbH for investing work-time into the project
Maciej Swiderski for his informative blog http://mswiderski.blogspot.de
My colleague Alexander Knyn, for being my PHP integration sparring partner

Links:


(1) Apache Thrift
(2) Extending the KIE Server

(3) KIE Server Apache Thrift extension
(4) Installing KIE Server and Workbench 

2 June 2015

Remote KIE Server, XStream and Hibernate Collections

If it is not possible, or not wanted, to add the Hibernate-Core package to the classpath of the remote KIE Server in order to deserialize org.hibernate.collection.internal.PersistentBag and other Hibernate-enhanced collections, then XStream offers a solution. The xstream-hibernate project offers support for dropping the internals of Hibernate-enhanced collections, proxied types and Hibernate's collection proxies when marshalling such objects with XStream. All converters and the mapper have to be registered on the XStream instance:
XStream xstream = new XStream() {
    @Override
    protected MapperWrapper wrapMapper(final MapperWrapper next) {
        return new HibernateMapper(next);
    }
};
// configure XStream for KIE
xstream = XStreamXML.newXStreamMarshaller(xstream);
xstream.registerConverter(new HibernateProxyConverter());
xstream.registerConverter(new HibernatePersistentCollectionConverter(xstream.getMapper()));
xstream.registerConverter(new HibernatePersistentMapConverter(xstream.getMapper()));
xstream.registerConverter(new HibernatePersistentSortedMapConverter(xstream.getMapper()));
xstream.registerConverter(new HibernatePersistentSortedSetConverter(xstream.getMapper()));

Maven:
<dependency>
    <groupId>com.thoughtworks.xstream</groupId>
    <artifactId>xstream-hibernate</artifactId>
    <version>1.4.8</version>
</dependency>

10 May 2015

Domain Extensions for Data Modeller

The Drools Workbench is getting domain extensions for the Data Modeller. This will allow different domains to augment the model, such as custom annotations for JPA or OptaPlanner. No more XML mapping!

23 February 2015

Setting up Drools Workbench and Execution Server (6.2.0.CR4)

Get wildfly-8.1.0.Final.zip and unzip it into a directory. Before starting the server, add three users using the $WILDFLY_HOME/bin/add-user.sh script (or .bat for Windows). By default these users are stored, with their passwords hashed, in:

/standalone/configuration/mgmt-users.properties and application-users.properties
The first user will become Admin on Wildfly and will be stored in mgmt-users.properties:
*What type of user do you wish to add?
Insert a
*Enter the details of the new user to add.
Username: admin
Password: xxxxxxxx
*What groups do you want this user to belong to? (Please enter a comma separated list,
or leave blank for none)[ ]:
Just leave this blank by hitting enter
*Is this new user going to be used for one AS process to connect to another AS process?
e.g. for a slave host controller connecting to the master or for a Remoting connection
for server to server EJB calls.
yes/no?
Insert no

Now we have our Wildfly Admin. Next up is the Drools Workbench user with admin rights.
*What type of user do you wish to add?
Insert b
*Enter the details of the new user to add.
Username: wb-user
Password: xxxxxxxx
*What groups do you want this user to belong to? (Please enter a comma separated list,
or leave blank for none)[ ]:
Insert admin
*About to add user 'wb-user' for realm 'ApplicationRealm'
Is this correct yes/no?
Insert yes
*Is this new user going to be used for one AS process to connect to another AS process?
e.g. for a slave host controller connecting to the master or for a Remoting connection for
server to server EJB calls.
yes/no?
Insert no

So, this is our Drools Workbench user. The last user is needed for the Drools execution server.
*What type of user do you wish to add?
Insert b
*Enter the details of the new user to add.
Username: ks-user
Password: xxxxxxxx
*What groups do you want this user to belong to? (Please enter a comma separated list,
or leave blank for none)[ ]:
Insert kie-server
*About to add user 'ks-user' for realm 'ApplicationRealm'
Is this correct yes/no?
Insert yes
*Is this new user going to be used for one AS process to connect to another AS process?
e.g. for a slave host controller connecting to the master or for a Remoting connection for
server to server EJB calls.
yes/no?
Insert no

To use one user for Drools Workbench and Execution server, combine the groups.
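As a side note, the add-user script can also be run non-interactively, which avoids walking through the dialogs above in scripted setups. The flags below are my understanding of the script's options; check add-user.sh --help on your WildFly version before relying on them:

```shell
# Hypothetical non-interactive equivalents of the dialogs above
$WILDFLY_HOME/bin/add-user.sh -m -u admin -p xxxxxxxx                 # management realm
$WILDFLY_HOME/bin/add-user.sh -a -u wb-user -p xxxxxxxx -g admin      # application realm
$WILDFLY_HOME/bin/add-user.sh -a -u ks-user -p xxxxxxxx -g kie-server
```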

Start Wildfly using $WILDFLY_HOME/bin/standalone.sh (or .bat for Windows), browse to
http://127.0.0.1:9990 and log in as the Wildfly admin.
Now we need to download the Drools Workbench and Execution server .war files.
They are located in the JBoss release repository under:
https://repository.jboss.org/nexus/index.html#view-repositories;releases~browsestorage.
Navigate in the lower panel to org/kie/kie-drools-wb-distribution-wars/6.2.0.CR4
and download the wildfly8.war file. For the Kie server navigate to
org/kie/kie-server-distribution-wars/6.2.0.CR4 and download the
kie-server-distribution-wars-6.2.0.CR4-ee6.war or -ee7.war file according to the
Java Enterprise version in use. Use the Wildfly Management to load and enable the two war files.
The Wildfly log files can be found under /standalone/log.
Log in to the Drools Workbench as the second user:
http://localhost:8080/kie-drools-wb-distribution-wars-6.2.0.CR4-wildfly8/
(I have issues loading the Workbench in Chrome and Firefox, but IE seems to work fine)
All that is left is to test the Kie server deployment. Type the following URL into the browser
and log in using the third user:
http://localhost:8080/kie-server-distribution-wars-6.2.0.CR4-ee7/services/rest/server
If successful you will get the following response:

<response msg="Kie Server info" type="SUCCESS">
<kie-server-info>
<version>6.2.0.CR4</version>
</kie-server-info>
</response>
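The same check can be done from the command line; replace the password with the one you chose for ks-user:

```shell
curl -u ks-user:xxxxxxxx \
  http://localhost:8080/kie-server-distribution-wars-6.2.0.CR4-ee7/services/rest/server
```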

The Execution server is registered with Drools Workbench under menu item Deploy/Rule Deployments and then Register (on the right of the Window).
Endpoint:
http://localhost:8080/kie-server-distribution-wars-6.2.0.CR4-ee7/services/rest/server
Name:
container-name
Username
ks-user
Password
xxxxxxxx
and Connect.

This will create a named Container to deploy and execute the Rules.

Have fun!

26 August 2014

OpenCV, Java and CLAHE

It seems that the OpenCV developers missed porting the creation of the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm to Java in OpenCV 2.4.9, so my fellow developer Michael Niephaus and I implemented a version in Java using the objects from the OpenCV project.

import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Range;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class CLAHE {

    private final static int BUFFER_SIZE = 256;
    private final double clipLimit_;
    private final int tilesX_;
    private final int tilesY_;
    Mat lut_ = new Mat();

    public CLAHE() {
        this(40, 8, 8);
    }

    public static int saturateCast(int x) {
        return x > BUFFER_SIZE - 1 ? BUFFER_SIZE - 1 : (x < 0 ? 0 : x);
    }

    public static int saturateCast(float x) {
        return (int) (x > BUFFER_SIZE - 1 ? BUFFER_SIZE - 1 : (x < 0 ? 0 : x));
    }

    public int GCD(int a, int b) {
        return b == 0 ? a : GCD(b, a % b);
    }

    public CLAHE(double clipLimit, int tilesX, int tilesY) {
        this.clipLimit_ = clipLimit;
        this.tilesX_ = tilesX;
        this.tilesY_ = tilesY;
    }

    public Mat apply(Mat src) {
        if (src.type() != CvType.CV_8UC1) {
            throw new IllegalArgumentException("Mat not of type CV_8UC1!");
        }
        Mat dst = new Mat(src.size(), src.type());
        lut_.create(tilesX_ * tilesY_, BUFFER_SIZE, CvType.CV_8UC1);

        Size tileSize;
        Mat srcForLut;

        if (src.cols() % tilesX_ == 0 && src.rows() % tilesY_ == 0) {
            tileSize = new Size(src.cols() / tilesX_, src.rows() / tilesY_);
            srcForLut = src;
        } else {
            Mat srcExt_ = new Mat();
            Imgproc.copyMakeBorder(src, srcExt_, 0, tilesY_ - (src.rows() % tilesY_), 0, tilesX_ - (src.cols() % tilesX_), Imgproc.BORDER_REFLECT_101);
            tileSize = new Size(srcExt_.cols() / tilesX_, srcExt_.rows() / tilesY_);
            srcForLut = srcExt_;
        }

        double tileSizeTotal = tileSize.area(); // int ?
        float lutScale = (float) ((BUFFER_SIZE - 1) / tileSizeTotal); // why BUFFER_SIZE - 1 ?
        int clipLimit = 0;
        if (clipLimit_ > 0.0) {
            clipLimit = (int) (clipLimit_ * tileSizeTotal / BUFFER_SIZE);
            if (clipLimit < 1) {
                clipLimit = 1;
            }
        }
        CLAHE_CalcLut_Body calcLutBody = new CLAHE_CalcLut_Body(srcForLut, lut_, tileSize, tilesX_, clipLimit, lutScale);
        calcLutBody.execute(new Range(0, tilesX_ * tilesY_));

        CLAHE_Interpolation_Body interpolationBody = new CLAHE_Interpolation_Body(src, dst, lut_, tileSize, tilesX_, tilesY_);
        interpolationBody.execute(new Range(0, src.rows()));

        return dst;
    }

    private class CLAHE_Interpolation_Body {

        Mat src_;
        Mat dst_;
        Mat lut_;
        Size tileSize_;
        int tilesX_;
        int tilesY_;

        CLAHE_Interpolation_Body(Mat src, Mat dst, Mat lut_, Size tileSize, int tilesX_, int tilesY_) {
            this.src_ = src;
            this.dst_ = dst;
            this.lut_ = lut_;
            this.tileSize_ = tileSize;
            this.tilesX_ = tilesX_;
            this.tilesY_ = tilesY_;
        }

        void execute(Range range) {
            int lut_step = (int) lut_.step1();
            int lut_break = tilesX_ * lut_step;

            for (int y = range.start; y < range.end; ++y) {

                float tyf = (y / (float) tileSize_.height) - 0.5f;
                int ty1 = (int) Math.floor(tyf);
                int ty2 = ty1 + 1;
                float ya = tyf - ty1;
                // keep largest
                if (ty1 < 0) {
                    ty1 = 0;
                }
                // keep smallest
                if (ty2 > tilesY_ - 1) {
                    ty2 = tilesY_ - 1;
                }

                int lutPlane1 = ty1 * tilesX_;
                int lutPlane2 = ty2 * tilesX_;

                for (int x = 0; x < src_.cols(); x++) {

                    float txf = (x / (float) tileSize_.width) - 0.5f;
                    int tx1 = (int) Math.floor(txf);
                    int tx2 = tx1 + 1;
                    float xa = txf - tx1;
                    // keep largest
                    if (tx1 < 0) {
                        tx1 = 0;
                    }
                    // keep smallest
                    if (tx2 > tilesX_ - 1) {
                        tx2 = tilesX_ - 1;
                    }
                    // original pixel value
                    double[] ptr = src_.get(y, x);
                    int srcVal = (int) ptr[0];

                    int ind1 = tx1 * lut_step + srcVal;
                    int ind2 = tx2 * lut_step + srcVal;

                    int column1 = (ind1 + (ty1 * lut_break)) % lut_step;
                    int row1 = (ind1 + (ty1 * lut_break)) / lut_step;

                    int column2 = (ind2 + (ty1 * lut_break)) % lut_step;
                    int row2 = (ind2 + (ty1 * lut_break)) / lut_step;

                    int column3 = (ind1 + (ty2 * lut_break)) % lut_step;
                    int row3 = (ind1 + (ty2 * lut_break)) / lut_step;

                    int column4 = (ind2 + (ty2 * lut_break)) % lut_step;
                    int row4 = (ind2 + (ty2 * lut_break)) / lut_step;

                    float res = 0;

                    double[] lut_ptr1 = lut_.get(row1, column1);
                    res += ((byte) lut_ptr1[0] & 0xFF) * ((1.0f - xa) * (1.0f - ya));

                    double[] lut_ptr2 = lut_.get(row2, column2);
                    res += ((byte) lut_ptr2[0] & 0xFF) * ((xa) * (1.0f - ya));

                    double[] lut_ptr3 = lut_.get(row3, column3);
                    res += ((byte) lut_ptr3[0] & 0xFF) * ((1.0f - xa) * (ya));

                    double[] lut_ptr4 = lut_.get(row4, column4);
                    res += ((byte) lut_ptr4[0] & 0xFF) * ((xa) * (ya));

                    dst_.put(y, x, saturateCast(res));
                }
            }
        }
    }

    private class CLAHE_CalcLut_Body {

        Mat src_;
        Mat lut_;
        Size tileSize_;
        int tilesX_;
        int clipLimit_;
        float lutScale_;

        CLAHE_CalcLut_Body(Mat srcForLut, Mat lut_, Size tileSize, int tilesX_, int clipLimit, float lutScale) {
            this.src_ = srcForLut;
            this.lut_ = lut_;
            this.tileSize_ = tileSize;
            this.tilesX_ = tilesX_;
            this.clipLimit_ = clipLimit;
            this.lutScale_ = lutScale;
        }

        void execute(Range range) {
            int[] tileHist;
            int[] lutBytes = new int[lut_.height() * lut_.width()];
            for (int k = range.start; k < range.end; ++k) {
                int ty = k / tilesX_;
                int tx = k % tilesX_;
                // retrieve tile submatrix
                Rect tileROI = new Rect();
                tileROI.x = (int) (tx * tileSize_.width);
                tileROI.y = (int) (ty * tileSize_.height);
                tileROI.width = (int) tileSize_.width;
                tileROI.height = (int) tileSize_.height;
                Mat tile = src_.submat(tileROI);
                // calc histogram
                tileHist = new int[BUFFER_SIZE];
                int height = tileROI.height;

                for (int h = height; h > 0; h--) {
                    int x;
                    double[] ptr;
                    for (int w = 0; w < tileROI.width; w++) {
                        ptr = tile.get(h - 1, w);
                        tileHist[(int) ptr[0]]++;
                    }
                }
                // clip histogram
                if (clipLimit_ > 0) {
                    // how many pixels were clipped
                    int clipped = 0;
                    for (int i = 0; i < BUFFER_SIZE; ++i) {
                        if (tileHist[i] > clipLimit_) {
                            clipped += tileHist[i] - clipLimit_;
                            tileHist[i] = clipLimit_;
                        }
                    }
                    // redistribute clipped pixels
                    int redistBatch = clipped / BUFFER_SIZE;
                    int residual = clipped - redistBatch * BUFFER_SIZE;
                    for (int i = 0; i < BUFFER_SIZE; ++i) {
                        tileHist[i] += redistBatch;
                    }
                    for (int i = 0; i < residual; ++i) {
                        tileHist[i]++;
                    }
                }
                // calc Lut
                int sum = 0;
                for (int i = 0; i < BUFFER_SIZE; ++i) {
                    sum += tileHist[i];
                    lut_.put(k, i, saturateCast(Math.round(sum * lutScale_)));
                }
            }
        }
    }
}

//  DISCLAIMER :
// This software is provided "as is".
// Any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the blog writer be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.

4 January 2014

OpenCV CMake Java Maven OSGi toolchain

OpenCV is a cross-platform computer vision library written in C/C++ with a Java interface. The first thing to solve is the generation and compilation of the sources using CMake and Maven. The Maven plugin cmake-maven-project does great things here. Now we can add this as a dependency to our OpenCV Maven project and use it for cross-platform compilation.
Download the OpenCV sources from GitHub and add them to your OpenCV Maven project. The OpenCV CMake make-files produce, depending on the platform compiling, a .dll or .so file and the Java source files. These are copied into src/main/resources and src/main/java respectively. Now we are ready for the Maven Bundle plugin.
OSGi has an easy way to resolve dependencies on native code: the Bundle-NativeCode manifest header. Point it to the .dll / .so file in resources. Add a bundle activator class to the package, calling System.loadLibrary, and you are ready to deploy OpenCV to OSGi.
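A minimal sketch of how this could look with the maven-bundle-plugin; the activator class name and the native library file names are assumptions here and depend on your OpenCV version and target platforms:

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- hypothetical activator calling System.loadLibrary in start() -->
      <Bundle-Activator>com.example.opencv.Activator</Bundle-Activator>
      <!-- the framework selects the matching native library per OS/processor -->
      <Bundle-NativeCode>
        libopencv_java248.so; osname=Linux; processor=x86-64,
        opencv_java248.dll; osname=Win32; processor=x86-64
      </Bundle-NativeCode>
    </instructions>
  </configuration>
</plugin>
```

With this header in place, the bundle only resolves on platforms for which a native library entry matches, which fails fast instead of throwing UnsatisfiedLinkError at runtime.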