
Hemant Puranik

Hemant Puranik is a Technical Product Manager at MarkLogic, with a specific focus on data engineering tooling. He is responsible for creating and managing field-ready materials, including best-practice documentation, code and scripts, modules for partner products, whitepapers, and demonstrations of joint solutions, all intended to accelerate MarkLogic projects that use ETL, data cleansing, structural transformation, and other data engineering tools and technologies.

Prior to MarkLogic, Hemant worked for more than a decade at SAP and BusinessObjects as a Product Manager and Engineering Manager on a variety of products within the Enterprise Information Management portfolio, including text analytics, data quality, data stewardship, data integration, and master data management.

December 21, 2020
You may already be using Spark, and now you want to build an Operational Data Hub. But that's impossible without a transactional database, and this is where MarkLogic comes in. MarkLogic works very well as an Operational Data Hub, and when Spark works alongside MarkLogic, the Operational Data Hub becomes even more powerful.
By now you may have heard that Apache Spark is the fastest-growing project in the open-source "Big Data" community. Spark does not include its own distributed data persistence technology, but it can work with any Hadoop-compatible data format. You can use the MarkLogic Connector for Hadoop as an input source for Spark and take advantage of the Spark framework to develop "Big Data" applications on top of MarkLogic.
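As a rough sketch of what this looks like in practice, the snippet below builds the Hadoop configuration that the MarkLogic Connector for Hadoop reads its connection details from, then shows (commented out) how that configuration would be handed to Spark's `newAPIHadoopRDD`. The property names and the `com.marklogic.mapreduce` class names follow the connector's documented conventions, but treat the exact values as assumptions to verify against your connector version; the host, port, and credentials are placeholders.

```python
# Sketch: reading MarkLogic documents into a Spark RDD through the
# MarkLogic Connector for Hadoop. Connection settings are passed to the
# connector as ordinary Hadoop configuration properties.

def marklogic_input_conf(host, port, user, password):
    """Build the Hadoop configuration dict the connector's input
    format reads its MarkLogic connection details from.
    (Property names assumed from the connector docs -- verify.)"""
    return {
        "mapreduce.marklogic.input.host": host,
        "mapreduce.marklogic.input.port": str(port),
        "mapreduce.marklogic.input.username": user,
        "mapreduce.marklogic.input.password": password,
    }

# Placeholder connection details for illustration only.
conf = marklogic_input_conf("localhost", 8000, "admin", "admin")

# With a live SparkContext `sc` and the connector JAR on the classpath,
# the configuration would be used roughly like this (class names are
# from the connector's com.marklogic.mapreduce package):
#
# rdd = sc.newAPIHadoopRDD(
#     "com.marklogic.mapreduce.DocumentInputFormat",   # input format
#     "com.marklogic.mapreduce.DocumentURI",           # key class
#     "com.marklogic.mapreduce.DatabaseDocument",      # value class
#     conf=conf)
# rdd.count()  # each record is a (URI, document) pair from MarkLogic
```

From there, the resulting RDD behaves like any other Spark dataset, so the rest of your pipeline (transformations, joins, machine learning) needs no MarkLogic-specific code.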