Traditionally, business-rule applications process a few megabytes of data as client/server requests, one record at a time. This generally works fine, but as solutions increasingly move to the cloud, data is measured in terabytes, not megabytes. Apache Hadoop, a distributed data-processing framework, evolved to meet these demands. But for data scientists familiar with Hadoop, another requirement has emerged: the ability to model, build, and test decisions quickly against these large data sets. This is where Decision Composer, with its graphical decision-modelling environment, stands out.
By combining Decision Composer and Hadoop on IBM Cloud, you can scale your rule solutions to the world of big data and rapidly develop decision models that analyse large data sets in the cloud without any coding.
This tutorial provides the glue you need to integrate decision models created in Decision Composer with Hadoop and run those models against big data.