Main point: Billions of rows × millions of columns


Key Features:

  • Modeled after Google’s BigTable
  • Uses Hadoop’s HDFS as storage
  • Map/reduce with Hadoop
  • Query predicate push down via server-side scan and get filters (see the sketch after this list)
  • Optimizations for real time queries
  • A high performance Thrift gateway
  • An HTTP/REST gateway that supports XML, Protobuf, and binary encodings
  • JRuby-based (JIRB) shell
  • Rolling restart for configuration changes and minor upgrades
  • Random access performance comparable to MySQL
  • A cluster consists of several different types of nodes
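
To make the predicate push-down point concrete, here is a minimal Java client sketch that attaches a server-side filter to a scan, so region servers discard non-matching rows before anything crosses the network. It assumes an HBase 2.x client on the classpath; the table name `access_logs` and the `info:status` column are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterScanExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("access_logs"))) { // hypothetical table
            // The filter is serialized to each region server, so only matching
            // rows are returned to the client (predicate push down).
            SingleColumnValueFilter filter = new SingleColumnValueFilter(
                    Bytes.toBytes("info"),   // hypothetical column family
                    Bytes.toBytes("status"), // hypothetical qualifier
                    CompareOperator.EQUAL,
                    Bytes.toBytes("404"));
            Scan scan = new Scan();
            scan.setFilter(filter);
            try (ResultScanner results = table.getScanner(scan)) {
                for (Result row : results) {
                    System.out.println(Bytes.toString(row.getRow()));
                }
            }
        }
    }
}
```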

Best used: when you need to run Map/Reduce jobs over huge datasets; Hadoop is probably still the best way to do that, and HBase is a natural fit if you already use the Hadoop/HDFS stack (see the sketch below).
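
As a rough illustration of that Hadoop integration, below is a sketch of the classic row-counting pattern using `TableMapReduceUtil`, which feeds an HBase table straight into a map-only job. Again, the table name is hypothetical, and this assumes the HBase MapReduce artifacts are on the job's classpath.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class RowCountJob {
    // Mapper that just bumps a counter for every row it sees.
    static class CountMapper extends TableMapper<NullWritable, NullWritable> {
        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context ctx)
                throws IOException, InterruptedException {
            ctx.getCounter("hbase", "rows").increment(1);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "hbase-row-count");
        job.setJarByClass(RowCountJob.class);
        Scan scan = new Scan();
        scan.setCaching(500);       // fetch rows in batches for scan throughput
        scan.setCacheBlocks(false); // don't pollute the block cache from a batch job
        TableMapReduceUtil.initTableMapperJob(
                "access_logs",      // hypothetical table name
                scan, CountMapper.class,
                NullWritable.class, NullWritable.class, job);
        job.setOutputFormatClass(NullOutputFormat.class);
        job.setNumReduceTasks(0);   // map-only: the counter is the output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```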

Examples: Search engines. Analysing log data. Any place where scanning huge, two-dimensional, join-less tables is a requirement.